
Clarification on Context Length for Code Generation #2

Open
ramaneswaran opened this issue Dec 24, 2024 · 1 comment

Comments

@ramaneswaran

Hi team,

Thank you for providing the benchmark and inference examples—they’ve been incredibly helpful.

I have a question regarding the context length used during generation. In inference/example.py, the context does not appear to be truncated and is used in its entirety, even when it exceeds 16K tokens.

In my experiments, I’ve set the context length to 12K tokens, but I haven’t been able to generate any code that successfully passes the test cases.

Could you please clarify what maximum context length is used in your inference setup?

@shanchaoL
Collaborator

Hi ramaneswaran,

Thank you for your question and for sharing your observations.

The maximum context length in our inference setup is 12K tokens. The example.py file serves as a demonstration of the inference process and doesn't enforce strict truncation logic.

During actual inference, the system prompt and the target function's description are always included in the context. If the total context exceeds the 12K-token limit, we truncate the other contexts so that the system prompt and the target function's description remain intact within the allowed token budget.
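
For reference, here is a minimal sketch of that kind of truncation strategy. It assumes a Hugging Face tokenizer; the function name, the 12K constant, and the model name in the usage comment are hypothetical illustrations, not taken from the repository:

```python
# Hypothetical sketch of budget-aware context truncation, assuming a
# Hugging Face tokenizer. Not the repository's actual implementation.
from transformers import AutoTokenizer

MAX_CONTEXT_TOKENS = 12_000  # assumed 12K-token budget from the discussion


def build_prompt(system_prompt: str, target_description: str,
                 other_contexts: list[str], tokenizer) -> str:
    """Keep the system prompt and target function description intact;
    fit as much of the remaining context as the token budget allows."""
    def n_tokens(text: str) -> int:
        return len(tokenizer.encode(text, add_special_tokens=False))

    # Tokens that must always be present, regardless of budget.
    reserved = n_tokens(system_prompt) + n_tokens(target_description)
    budget = MAX_CONTEXT_TOKENS - reserved

    kept = []
    for ctx in other_contexts:
        cost = n_tokens(ctx)
        if cost <= budget:
            kept.append(ctx)
            budget -= cost
        else:
            # Truncate the first context that does not fully fit, then stop.
            ids = tokenizer.encode(ctx, add_special_tokens=False)[:budget]
            if ids:
                kept.append(tokenizer.decode(ids))
            break

    return "\n\n".join([system_prompt, *kept, target_description])


# Example usage (hypothetical model name):
# tok = AutoTokenizer.from_pretrained("deepseek-ai/deepseek-coder-6.7b-instruct")
# prompt = build_prompt(system_prompt, target_description, retrieved_contexts, tok)
```
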
