Fix: bugs when using opensource models #609

Closed
wants to merge 4 commits into from
1 change: 1 addition & 0 deletions graphrag/index/utils/json.py
@@ -6,6 +6,7 @@

def clean_up_json(json_str: str):
"""Clean up json string."""
json_str = json_str[json_str.index('{'):]
It will raise an error in global query:

json_str = json_str[json_str.index('{'):]
ValueError: substring not found
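A more defensive variant (a sketch, not the PR's actual code; the function name is hypothetical) avoids the `ValueError` by using `str.find`, which returns `-1` instead of raising when no brace is present:

```python
def clean_up_json_prefix(json_str: str) -> str:
    """Trim any leading text before the first '{'.

    Falls back to the original string when no brace is present,
    avoiding the ValueError that str.index raises.
    """
    brace = json_str.find("{")  # find returns -1 instead of raising
    return json_str[brace:] if brace != -1 else json_str
```

For example, `clean_up_json_prefix('Sure, here you go: {"a": 1}')` strips the chatty prefix, while a brace-free string passes through unchanged.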

@PaulSZH95 (Author) Aug 1, 2024
Regarding your error: no matter how well the JSON parser is written, you will still hit errors from time to time.

Reason: the model isn't always able to output JSON in the format you require.

So far the only practical solution is to retry when errors occur; they will keep happening occasionally even with a fine-tuned GPT-4 model. This is speaking from experience, but I also haven't seen any model score 100% on HumanEval-style benchmarks. LangChain's approach is retrying as well; you probably notice it less because the retries are hidden unless you opt into verbose output.

A better fix would probably be to retry on parsing errors rather than to patch the parsing logic.
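A minimal sketch of that retry-on-parse-error idea (names and signatures here are illustrative, not graphrag's API): re-invoke the model while `json.loads` keeps failing, up to a bounded number of attempts.

```python
import json
from typing import Any, Callable


def generate_json_with_retry(
    call_model: Callable[[str], str],  # hypothetical: prompt in, raw text out
    prompt: str,
    max_attempts: int = 3,
) -> dict[str, Any]:
    """Re-prompt the model until its output parses as JSON."""
    last_error: Exception | None = None
    for _ in range(max_attempts):
        raw = call_model(prompt)
        try:
            return json.loads(raw)
        except ValueError as exc:  # json.JSONDecodeError subclasses ValueError
            last_error = exc  # malformed output: try again
    raise ValueError(f"model never produced valid JSON: {last_error}")
```

In practice the retry prompt could also include the previous parse error, but even this blind retry captures the point above: treat malformed output as expected and bound the retries, rather than hardening the parser indefinitely.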

json_str = (
    json_str.replace("\\n", "")
    .replace("\n", "")
1 change: 1 addition & 0 deletions graphrag/query/llm/oai/embedding.py
@@ -82,6 +82,7 @@ def embed(self, text: str, **kwargs: Any) -> list[float]:
chunk_lens = []
for chunk in token_chunks:
    try:
        chunk = self.token_encoder.decode(chunk)
        embedding, chunk_len = self._embed_with_retry(chunk, **kwargs)
        chunk_embeddings.append(embedding)
        chunk_lens.append(chunk_len)
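The added line decodes each token chunk back into a string before embedding, since many open-source embedding endpoints accept text rather than token IDs. A standalone sketch of the same idea (the `TokenEncoder` protocol and function name here are assumptions for illustration, not graphrag's types):

```python
from typing import Protocol


class TokenEncoder(Protocol):
    """Anything with tiktoken-style encode/decode."""

    def encode(self, text: str) -> list[int]: ...
    def decode(self, tokens: list[int]) -> str: ...


def chunk_and_decode(text: str, encoder: TokenEncoder, chunk_size: int) -> list[str]:
    """Split text into token-bounded chunks, then decode each chunk
    back to a string so it can be sent to an embeddings endpoint
    that expects text, not token IDs."""
    tokens = encoder.encode(text)
    return [
        encoder.decode(tokens[i : i + chunk_size])
        for i in range(0, len(tokens), chunk_size)
    ]
```

With this shape, the chunk boundaries are still measured in tokens (so each request stays under the model's context limit), but the payload handed to the API is plain text, which is what the PR's one-line fix restores.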