Right now, we don't detect the context length error properly:
```
Unexpected chunk: {
  "error": {
    "message": "This model's maximum context length is 8192 tokens. However, your messages resulted in 9074 tokens. Please reduce the length of the messages.",
    "type": "invalid_request_error",
    "param": "messages",
    "code": "context_length_exceeded"
  }
}
```
Since we retry this error, we end up accidentally causing a rate limit problem:
```
Unexpected chunk: {
  "error": {
    "message": "Rate limit reached for default-gpt-4 in organization org-XXXXX on tokens per min. Limit: 40000 / min. Please try again in 1ms. Contact us through our help center at help.openai.com if you continue to have issues.",
    "type": "tokens",
    "param": null,
    "code": null
  }
}
```
This error is not handled correctly either (see the sketch below).
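Unlike the context length error, the rate limit error is transient and should be retried with backoff rather than immediately. A minimal sketch, not the project's actual code: `send_request` is a hypothetical callable that returns the parsed `error` dict from a chunk like the one above, or `None` on success. Note the rate limit chunk carries `"type": "tokens"` with a null `code`, so we key off the type field; treating `"requests"` as retryable too is an assumption.

```python
import random
import time

# Error types assumed to be transient and worth retrying.
RETRYABLE_TYPES = {"tokens", "requests"}

def send_with_backoff(send_request, max_attempts: int = 5):
    for attempt in range(max_attempts):
        error = send_request()
        if error is None:
            return  # success
        if error.get("type") not in RETRYABLE_TYPES:
            raise RuntimeError(f"non-retryable error: {error}")
        # Exponential backoff with jitter so retries stop hammering the limiter.
        time.sleep(min(2 ** attempt, 30) + random.random())
    raise RuntimeError("still rate limited after all retry attempts")
```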