Is your feature request related to a problem? Please describe.
Much of the time, the agents time out or exit because they exhaust the rate limits imposed by the various LLM providers supported by GPT-R.
Describe the solution you'd like
Perhaps we could apply a global rate limiter, enabled by default or via configuration, so that all agents share a single budget of LLM calls.
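A minimal sketch of what such a shared limiter could look like, assuming an asyncio-based sliding window; the class name GlobalRateLimiter and the max_calls/period parameters are hypothetical and not part of GPT-R today:

```python
import asyncio
import time


class GlobalRateLimiter:
    """Allow at most `max_calls` LLM calls per `period` seconds, shared by all agents."""

    def __init__(self, max_calls: int, period: float = 60.0):
        self.max_calls = max_calls
        self.period = period
        self._timestamps: list[float] = []
        self._lock = asyncio.Lock()

    async def acquire(self) -> None:
        while True:
            async with self._lock:
                now = time.monotonic()
                # Drop call timestamps that have aged out of the window.
                self._timestamps = [t for t in self._timestamps if now - t < self.period]
                if len(self._timestamps) < self.max_calls:
                    self._timestamps.append(now)
                    return
                # Otherwise wait until the oldest call leaves the window.
                wait = self.period - (now - self._timestamps[0])
            await asyncio.sleep(wait)
```

Every LLM call site would then `await limiter.acquire()` on one process-wide instance before hitting the provider, which keeps the whole research run under the provider's quota instead of failing mid-way.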
Describe alternatives you've considered
Wrapping the code that calls GPT-R in timeouts or try/catch blocks. This is very clunky, since the entire research-and-report generation process has to be restarted from scratch.
Additional context
Browsing through the GitHub issues, it seems several other users are running into the rate limits imposed by LLM providers as well.
For the global option, perhaps a rate-limit/retry handler inside the create_chat_completion helper function? That appears to be the helper used whenever an LLM is called, and I assume it's awaited wherever it's used, so a single change there would cover every agent. A rough sketch of the idea follows.
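This is only an illustration under stated assumptions: the import path for create_chat_completion, its call signature, and the exception type raised on a rate-limit error all vary by provider and may differ from what GPT-R actually exposes.

```python
import asyncio
import logging

# Assumed import path; the actual module housing the helper may differ.
from gpt_researcher.utils.llm import create_chat_completion

logger = logging.getLogger(__name__)


async def create_chat_completion_with_retry(*args, max_retries: int = 5,
                                            base_delay: float = 2.0, **kwargs):
    """Retry the existing helper with exponential backoff instead of crashing the run."""
    for attempt in range(max_retries):
        try:
            # Delegate to the existing helper that all LLM calls funnel through.
            return await create_chat_completion(*args, **kwargs)
        except Exception as exc:  # ideally narrowed to the provider's rate-limit error
            if attempt == max_retries - 1:
                raise
            delay = base_delay * (2 ** attempt)
            logger.warning("LLM call failed (%s); retrying in %.1fs", exc, delay)
            await asyncio.sleep(delay)
```

Combined with the global limiter above, this would turn a hard failure into a pause-and-retry, so the research and report generation can finish without being restarted.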