Set max tokens by prompt #255
Conversation
👍 Looks good to me! Reviewed everything up to 1d58efa in 1 minute and 20 seconds
More details
- Looked at 229 lines of code in 7 files
- Skipped 0 files when reviewing
- Skipped posting 5 drafted comments based on config settings
1. `graphiti_core/llm_client/config.py:17`
- Draft comment: The `DEFAULT_MAX_TOKENS` value is set to 1024 here, but other client files like `anthropic_client.py`, `groq_client.py`, and `openai_client.py` have different default values for `DEFAULT_MAX_TOKENS`. Consider consolidating these values to avoid inconsistencies.
- Reason this comment was not posted: Confidence changes required: 80%. The change in `DEFAULT_MAX_TOKENS` from 16384 to 1024 in `config.py` is not reflected in `anthropic_client.py`, `groq_client.py`, and `openai_client.py`, where `DEFAULT_MAX_TOKENS` is defined separately. This could lead to inconsistencies if the intention was to have a uniform default across all clients.
2. `graphiti_core/llm_client/anthropic_client.py:65`
- Draft comment: Ensure `max_tokens` is not `None` before using it in `max_tokens=max_tokens or self.max_tokens`. This logic could lead to unexpected behavior if `max_tokens` is explicitly set to `None`.
- Reason this comment was not posted: Confidence changes required: 70%. The `max_tokens` parameter is passed with a default value of `DEFAULT_MAX_TOKENS`, but the logic `max_tokens=max_tokens or self.max_tokens` could lead to unexpected behavior if `max_tokens` is explicitly set to `None`. This should be handled to ensure the correct default is used.
3. `graphiti_core/llm_client/groq_client.py:64`
- Draft comment: Ensure `max_tokens` is not `None` before using it in `max_tokens=max_tokens or self.max_tokens`. This logic could lead to unexpected behavior if `max_tokens` is explicitly set to `None`.
- Reason this comment was not posted: Confidence changes required: 70%. The `max_tokens` parameter is passed with a default value of `DEFAULT_MAX_TOKENS`, but the logic `max_tokens=max_tokens or self.max_tokens` could lead to unexpected behavior if `max_tokens` is explicitly set to `None`. This should be handled to ensure the correct default is used.
4. `graphiti_core/llm_client/openai_client.py:108`
- Draft comment: Ensure `max_tokens` is not `None` before using it in `max_tokens=max_tokens or self.max_tokens`. This logic could lead to unexpected behavior if `max_tokens` is explicitly set to `None`.
- Reason this comment was not posted: Confidence changes required: 70%. The `max_tokens` parameter is passed with a default value of `DEFAULT_MAX_TOKENS`, but the logic `max_tokens=max_tokens or self.max_tokens` could lead to unexpected behavior if `max_tokens` is explicitly set to `None`. This should be handled to ensure the correct default is used.
5. `examples/podcast/podcast_runner.py:59`
- Draft comment: Remove commented-out code if it's not needed, to keep the codebase clean and maintainable.
- Reason this comment was not posted: Confidence changes required: 80%. The commented-out code should be removed if not needed, as it clutters the codebase and hurts maintainability.
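Draft comments 1–4 above describe two related issues: the default value lives in several modules, and the `or` fallback conflates `None` with other falsy values. A minimal sketch of how both could be handled, assuming a shared constant in `config.py` (the class and method names below are illustrative, not the repo's actual code):

```python
# graphiti_core/llm_client/config.py -- single source of truth for the default (sketch)
DEFAULT_MAX_TOKENS = 1024


# In a client module, reuse the shared constant instead of redefining it locally,
# and resolve the per-call override with an explicit None check.
class LLMClientSketch:
    def __init__(self, max_tokens: int = DEFAULT_MAX_TOKENS):
        self.max_tokens = max_tokens

    def resolve_max_tokens(self, max_tokens: int | None = None) -> int:
        # `max_tokens or self.max_tokens` would also fall back when a caller passes 0;
        # checking for None keeps a deliberate 0 intact while still defaulting on None.
        return max_tokens if max_tokens is not None else self.max_tokens
```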
Workflow ID: wflow_aupVXIxl0a9Hpcb1
You can customize Ellipsis with 👍 / 👎 feedback, review rules, user-specific overrides, quiet mode, and more.
👍 Looks good to me! Incremental review on 329e103 in 22 seconds
More details
- Looked at 75 lines of code in 3 files
- Skipped 0 files when reviewing
- Skipped posting 2 drafted comments based on config settings
1. `graphiti_core/llm_client/openai_generic_client.py:114`
- Draft comment: The `max_tokens` parameter is not being used in `_generate_response`. Replace `self.max_tokens` with `max_tokens` to use the dynamic value passed to the function.
- Reason this comment was not posted: Comment was not on a valid diff hunk.
2. `examples/podcast/podcast_runner.py:56`
- Draft comment: The `main` function is doing multiple tasks: setting up logging, clearing data, building indices, and processing messages. Consider refactoring to separate these concerns into distinct functions.
- Reason this comment was not posted: Confidence changes required: 70%. The `main` function in `podcast_runner.py` is performing multiple tasks, which violates the Single Responsibility Principle. It should be refactored to separate concerns.
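Draft comment 1 above refers to a call that ignores the per-call value; the suggested fix would look roughly like this (a sketch with an assumed class layout, constructor defaults, and OpenAI-style client attribute, not the repo's exact code):

```python
from openai import AsyncOpenAI


class OpenAIGenericClientSketch:
    """Illustrative only; not the repo's actual class."""

    def __init__(self, model: str = 'gpt-4o-mini', max_tokens: int = 1024):
        self.client = AsyncOpenAI()
        self.model = model
        self.max_tokens = max_tokens

    async def _generate_response(self, messages: list[dict], max_tokens: int | None = None) -> str:
        # Honor the per-call value instead of always passing self.max_tokens.
        response = await self.client.chat.completions.create(
            model=self.model,
            messages=messages,
            max_tokens=max_tokens or self.max_tokens,
        )
        return response.choices[0].message.content
```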
Workflow ID: wflow_o5eFqZUoWsQ6nPR2
👍 Looks good to me! Incremental review on 70e0f0a in 13 seconds
More details
- Looked at 19 lines of code in 1 file
- Skipped 0 files when reviewing
- Skipped posting 2 drafted comments based on config settings
1. `graphiti_core/helpers.py:76`
- Draft comment: Consider specifying the return type for `normalize_l2` for clarity and consistency with the rest of the codebase.
- Reason this comment was not posted: Confidence changes required: 50%. The function signature change in `normalize_l2` is not consistent with the rest of the codebase. The return type should be specified for clarity and consistency.
2. `graphiti_core/helpers.py:76`
- Draft comment: `def normalize_l2(embedding: list[float]) -> list[float]:`
- Reason this comment was not posted: Confidence changes required: 80%. The function `normalize_l2` is missing a return type hint. It should specify the return type for clarity and consistency with Python typing standards.
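For reference, an L2-normalization helper with the suggested annotation might look like the following (a sketch assuming a numpy-based implementation; the repo's actual helper may differ):

```python
import numpy as np


def normalize_l2(embedding: list[float]) -> list[float]:
    # Scale the vector to unit L2 norm; return the input unchanged if the norm is zero.
    arr = np.array(embedding, dtype=np.float64)
    norm = np.linalg.norm(arr)
    if norm == 0.0:
        return embedding
    return (arr / norm).tolist()
```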
Workflow ID: wflow_xTYD6CEaC6fyfk3U
👍 Looks good to me! Incremental review on 1083bbe in 13 seconds
More details
- Looked at 13 lines of code in 1 file
- Skipped 0 files when reviewing
- Skipped posting 1 drafted comment based on config settings
1. `Dockerfile:26`
- Draft comment: Ensure that using `--only main --no-root` instead of `--no-dev` does not omit any necessary dependencies for the server to run correctly. This change alters the scope of installed dependencies.
- Reason this comment was not posted: Confidence changes required: 80%. The change in the Dockerfile from `--no-dev` to `--only main --no-root` is significant. It changes the scope of dependencies being installed, which could affect the runtime environment if any necessary dependencies are omitted. This should be verified against the project's requirements.
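For context, the two Poetry invocations compare roughly as follows (a sketch; the actual `Dockerfile` line may carry additional flags):

```dockerfile
# Before: exclude the dev group (the --no-dev flag is deprecated in Poetry 1.2+)
# RUN poetry install --no-dev

# After: install only the main dependency group and skip installing the project package itself
RUN poetry install --only main --no-root
```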
Workflow ID: wflow_aoBnuEF5LUNt9e6M
Important
Set default max tokens to 1024 for LLM prompts, with 16384 for edge extraction, and updated Dockerfile and version.
- `max_tokens` default reduced to 1024 in `config.py`.
- `edge_operations.py` has `max_tokens` set to 16384.
- Added a `max_tokens` parameter to `_generate_response()` and `generate_response()` in `client.py`, `anthropic_client.py`, `groq_client.py`, `openai_client.py`, and `openai_generic_client.py`.
- Dockerfile now uses `--only main --no-root` for Poetry installation.
- Version updated in `pyproject.toml`.

This description was created by Ellipsis for 1083bbe. It will automatically update as commits are pushed.
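Conceptually, the new parameter lets a caller override the 1024-token default on a per-prompt basis, along these lines (a sketch; the client class, constructor arguments, and message format are assumptions, not the exact API):

```python
import asyncio

from graphiti_core.llm_client.openai_client import OpenAIClient  # path taken from the files touched in this PR


async def demo() -> None:
    client = OpenAIClient()  # constructor arguments omitted for brevity
    messages = [{'role': 'user', 'content': 'Extract entities from this passage ...'}]

    # Uses DEFAULT_MAX_TOKENS (1024) from config.py
    await client.generate_response(messages)

    # A caller such as edge extraction can request a larger budget for a single call
    await client.generate_response(messages, max_tokens=16384)


asyncio.run(demo())
```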