
Spellbound spams the endpoint when context length is exceeded. #44

Open
poteat opened this issue Jul 3, 2023 · 0 comments
poteat commented Jul 3, 2023

Right now, we don't properly detect the context-length error:

Unexpected chunk: {
  "error": {
    "message": "This model's maximum context length is 8192 tokens. However, your messages resulted in 9074 tokens. Please reduce the length of the messages.",
    "type": "invalid_request_error",
    "param": "messages",
    "code": "context_length_exceeded"
  }
}
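A chunk with `code: "context_length_exceeded"` can never succeed on retry, since resending the same oversized prompt will fail identically. One way to fix this (a minimal sketch, not Spellbound's actual retry code; the `ApiError` shape and `isRetryable` helper are hypothetical, inferred from the chunks in this issue) is to classify errors before retrying:

```typescript
// Hypothetical shape of the error chunks shown above.
interface ApiError {
  error: {
    message: string;
    type: string;
    param: string | null;
    code: string | null;
  };
}

// Error codes that can never succeed on retry: the request itself is
// invalid, so retrying only spams the endpoint.
const FATAL_CODES: Set<string> = new Set(["context_length_exceeded"]);

function isRetryable(chunk: ApiError): boolean {
  const code = chunk.error.code;
  if (code !== null && FATAL_CODES.has(code)) {
    return false; // surface to the user instead of retrying
  }
  // Everything else (e.g. the rate-limit chunk below, type "tokens")
  // is retryable, but only after backing off.
  return true;
}
```

With this in place, a context-length error would be surfaced to the user (e.g. "please shorten the conversation") rather than retried.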

Because we retry this non-retryable error immediately, we end up hammering the endpoint and triggering a rate-limit error:

Unexpected chunk: {
    "error": {
        "message": "Rate limit reached for default-gpt-4 in organization org-XXXXX on tokens per min. Limit: 40000 / min. Please try again in 1ms. Contact us through our help center at help.openai.com if you continue to have issues.",
        "type": "tokens",
        "param": null,
        "code": null
    }
}

This rate-limit error is not handled correctly either: it is retryable, but only after backing off, and note that `code` is `null` here, so detection has to key off `type` ("tokens") rather than `code`.
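For retryable errors like the rate limit above, a capped exponential backoff would stop the spamming. A minimal sketch (the `backoffDelayMs` helper is hypothetical, not Spellbound's implementation):

```typescript
// Delay before the Nth retry: base * 2^attempt, capped so a long
// outage doesn't produce absurd waits. attempt is 0-indexed.
function backoffDelayMs(
  attempt: number,
  baseMs: number = 1_000,
  capMs: number = 60_000
): number {
  return Math.min(capMs, baseMs * 2 ** attempt);
}
```

So retries would wait 1s, 2s, 4s, 8s, ... up to the 60s cap, instead of re-sending immediately and burning through the tokens-per-minute limit.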
