1 sec. extra latency added since v.0.0.57 #1319
Comments
The reason for this is because before this change it was possible for transcriptions to be received outside of …
I totally agree that the 1 second timeout is unfortunate, but see below. 😅
I was just in the middle of doing just that 😄
See #1321
Hi pipecat team. Thanks for all your amazing work!
In v.0.0.57 you added this in processors.aggregators.llm_response.py:
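The original snippet didn't copy over here; the following is a simplified paraphrase of the change rather than the exact source (class and attribute names are approximate, and the real implementation lives inside the frame-processing machinery):

```python
import asyncio


class LLMUserContextAggregator:  # simplified stand-in for the real class
    def __init__(self, context, *, aggregation_timeout: float = 1.0, **kwargs):
        self._context = context
        self._aggregation_timeout = aggregation_timeout
        self._aggregation_event = asyncio.Event()

    async def _wait_for_aggregation(self):
        # Wait up to aggregation_timeout seconds for late transcriptions
        # before the aggregated user turn is pushed downstream; with the
        # default value this is where the extra ~1 s of latency comes from.
        try:
            await asyncio.wait_for(
                self._aggregation_event.wait(), timeout=self._aggregation_timeout
            )
        except asyncio.TimeoutError:
            pass
```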
The unfortunate and undocumented behavior induced by aggregation_timeout=1 is that it adds an extra second of latency to every user turn, which is unavoidable with the defaults and unnecessary. I suspect this is an unintended side effect of the intended functionality (which is not well documented)?
FYI, we tested this with your own twilio-chatbot example using Silero and VADParams(confidence=0.01, min_volume=0.01, start_secs=0.0001, stop_secs=0.0001) in v0.0.54, v0.0.55, v0.0.56, v0.0.57, and v0.0.58 -- running the exact same code (copied from v0.0.58).
We can of course just set aggregation_timeout=0 (which we did), but the natural way of instantiating the user context aggregator is something like this (e.g. in bot.py):
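For reference, this is roughly the pattern from the twilio-chatbot example (import paths are the ones used around v0.0.57 and may have moved in newer releases):

```python
import os

from pipecat.processors.aggregators.openai_llm_context import OpenAILLMContext
from pipecat.services.openai import OpenAILLMService

llm = OpenAILLMService(api_key=os.getenv("OPENAI_API_KEY"), model="gpt-4o")

messages = [{"role": "system", "content": "You are a helpful assistant."}]
context = OpenAILLMContext(messages)

# There is no obvious place to pass aggregation_timeout here.
context_aggregator = llm.create_context_aggregator(context)
```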
Thus, I would recommend setting the default value for aggregation_timeout to zero, or adding it as a keyword parameter in llm.create_context_aggregator().
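For the second option, something along these lines (a hypothetical signature, not part of the current API):

```python
# Hypothetical only: aggregation_timeout is not a create_context_aggregator()
# keyword in v0.0.58; this just sketches the suggestion above.
context_aggregator = llm.create_context_aggregator(
    context,
    aggregation_timeout=0.0,
)
```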
Thanks!