Rate limit #26
Comments
Thank you. We will check it out.
I am hitting the rate limit error too. I was using my GPT_4 API key. I tried the 32K version and got a different error. No batch script, just running from inside the VS Code terminal on Windows 11. Tracebacks below.

GPT-4 traceback:
[OpenAI_Usage_Info Receive]
Traceback (most recent call last):

GPT-4 32K traceback:
python run.py --task "write a me this example program" --name "Example Name" --model 'GPT_4_32K'
Traceback (most recent call last):
Refer to this page for rate limits: https://platform.openai.com/account/rate-limits
As they said, this problem occurs when using the GPT-4 model.
Is this a joke? I understand what rate limits are. I am trying to use Open Interpreter with a GPT-4 API key and I cannot, because there is no functionality that waits the required amount of time or works around the limit in some other way.
Fork the repo and change the code to use the v2 completion endpoint.
I get the same error. It works fine with GPT-3.5, but if I try GPT-4 it crashes with:
I followed the example using Tenacity here, and I was able to limit the requests made to the GPT-4 endpoints.
Also experiencing this issue using GPT-4. Is there somewhere in the code where I can add a rate limit, or adjust how requests are sent to the API, so that I can run this without hitting the rate limit and my project can complete? I prefer to use GPT-4 because (1) I have access and (2) it seems to perform better, particularly with coding, in my experience.
Hi @tbels89, |
@sjetha I had a look, but I'm not sure where in the program to put the tenacity code. What file(s) did you modify to get this working? Can you show an example of your implementation? I'm still pretty new to this and trying to figure it all out.
Can someone show example code or explain how to fix it?
You can add sleep code around the OpenAI API request.
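A sleep-and-retry wrapper like the one suggested here needs only the standard library. This is a hypothetical sketch: the helper name, the stand-in exception class, and the backoff values are mine, not part of the ChatDev codebase.

```python
import time

class RateLimitError(Exception):
    """Stand-in for the OpenAI client's rate-limit exception in this sketch."""

def call_with_sleep(make_request, max_retries=5, base_delay=1.0):
    """Call make_request(), sleeping with exponential backoff
    (base_delay, 2*base_delay, 4*base_delay, ...) each time it
    raises RateLimitError; re-raise after max_retries attempts."""
    for attempt in range(max_retries):
        try:
            return make_request()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise
            time.sleep(base_delay * 2 ** attempt)
```

Wrapping the chat-completion call in a helper like this turns a burst of 429s into a pause instead of a crash.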
@sjetha thanks for the tenacity suggestion. I was able to implement it and use GPT-4. I reviewed the log and it did have some rate limit hits, but because of the retry it never killed the process, and ChatDev was able to complete the task (not exactly what I was hoping it would output, but at least it wasn't hitting rate limits). For everyone wondering, here's what I did:

File: chatdev-main/camel/model_backend.py
Import: from tenacity import retry, wait_exponential
Under the OpenAIModel class, just above the run function, I added the following line of code:
Why has this been closed? It never works for me; it always errors for GPT-4, and I fought hard to get access :)
Quote your api_key in your code.
Still facing the rate limit error with GPT-4. Any confirmed fix?
For those doing a direct copy and paste of @tbels89's suggested modification: I believe the 'R' in '@Retry' should be lower case, i.e. '@ retry(...)' without the space. (Edit: comments automatically capitalise the letter following '@', which is why it renders as '@Retry' here.)
@GBurgardt Try the following in file chatdev-main/camel/agent/role_playing.py: add the import at the top of the file, and add the decorator on top of the function at line 235 (unless there was any change). If this doesn't work, copy your error output into GPT-4 and ask it where you should add it.
He would but he has hit his rate limit. |
Having the same issue here with gpt-4, not sure why this is closed. |
I think it was likely closed because there is an easy solution that involves modifying the code, as outlined in other comments in this issue. If someone wants to put a PR together that gives this work-around to everyone, that would be the next logical step.
It will work, you can try it.
Solved like this:
@qianyouliang Can you add this as a PR? We'll see if these guys will approve it since it's a good addition to the codebase. |
client = OpenAI(api_key=settings.IA_API_KEY,)

Error code: 429 - {'error': {'message': 'Rate limit reached for gpt-3.5-turbo in organization org-rxenTzjMkf2Po8dsexe410Mg on requests per min (RPM): Limit 3, Used 3, Requested 1. Please try again in 20s. Visit
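That 429 body tells the client exactly how long to wait ("Please try again in 20s"). A small helper can pull that number out before retrying; this is a hypothetical sketch, and the function name and default are mine.

```python
import re

def retry_after_seconds(message, default=20.0):
    # Extract the suggested wait from a 429 body like
    # "... Please try again in 20s. ..."; fall back to `default`.
    m = re.search(r"try again in (\d+(?:\.\d+)?)s", message)
    return float(m.group(1)) if m else default

msg = ("Rate limit reached for gpt-3.5-turbo on requests per min (RPM): "
       "Limit 3, Used 3, Requested 1. Please try again in 20s.")
wait = retry_after_seconds(msg)  # 20.0
```

Sleeping for this many seconds before re-sending the request respects the server's own hint rather than guessing a backoff.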
Recommend adding a rate-limiting function.
This is the error.
For rate limiting, refer to https://github.com/geekan/MetaGPT/blob/main/metagpt/provider/openai_api.py
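A client-side requests-per-minute limiter along the lines suggested above can be sketched with a sliding window of timestamps. This is a minimal hypothetical version, not the actual MetaGPT implementation linked; the class name and window parameter are mine.

```python
import time
from collections import deque

class RPMLimiter:
    """Minimal client-side requests-per-minute limiter (sketch)."""

    def __init__(self, rpm, window=60.0):
        self.rpm = rpm
        self.window = window   # seconds per window (60 = per minute)
        self.calls = deque()   # timestamps of recent requests

    def wait(self):
        now = time.monotonic()
        # Forget requests that have left the window.
        while self.calls and now - self.calls[0] >= self.window:
            self.calls.popleft()
        if len(self.calls) >= self.rpm:
            # Sleep until the oldest request expires, then forget it.
            time.sleep(self.window - (now - self.calls[0]))
            self.calls.popleft()
        self.calls.append(time.monotonic())

limiter = RPMLimiter(rpm=3)  # matches the "Limit 3" RPM tier in the 429 above
```

Calling limiter.wait() before each API request keeps the client under its quota instead of bouncing off it and handling 429s after the fact.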