Getting high latency (TTFB) even after self-hosting the Deepgram STT model #1038
-
Thanks for asking your question. Please be sure to reply with as much detail as possible so the community can assist you efficiently.
-
Hey there! It looks like you haven't connected your GitHub account to your Deepgram account. You can do this at https://community.deepgram.com - being verified through this process will allow our team to help you in a much more streamlined fashion.
-
It looks like we're missing some important information to help debug your issue. Would you mind providing us with the following details in a reply?
-
We're experiencing 1.38 s p80 latency (TTFB) with our self-hosted Deepgram Speech-to-Text setup over WebSocket streaming. Below is the configuration we're using:
interim_results: True
endpointing: 300ms
model: nova-2-general
language: hi
punctuate: False
profanity_filter: True
smart_format: True
diarize: False
TTFB: the time from when speech starts to when we receive the transcript.
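For reference, here is a minimal sketch of how a connection with these parameters can be opened and TTFB measured against a self-hosted deployment, using the raw `websockets` library. The endpoint URL/port, the `encoding`/`sample_rate` parameters, and the chunk pacing are illustrative assumptions, not our actual client code:

```python
import asyncio
import json
import time

import websockets

# Assumed local endpoint and audio format (linear16 @ 16 kHz); adjust to
# match your deployment and input stream. The query string mirrors the
# options listed above.
URL = (
    "ws://localhost:8080/v1/listen"
    "?model=nova-2-general&language=hi"
    "&interim_results=true&endpointing=300"
    "&punctuate=false&profanity_filter=true"
    "&smart_format=true&diarize=false"
    "&encoding=linear16&sample_rate=16000"
)

async def measure_ttfb(audio_chunks):
    """Stream audio chunks and return seconds until the first non-empty transcript."""
    async with websockets.connect(URL) as ws:
        # Treat stream start as speech start; close enough for a replayed
        # file that begins with speech.
        start = time.monotonic()

        async def sender():
            for chunk in audio_chunks:
                await ws.send(chunk)
                await asyncio.sleep(0.02)  # pace roughly like a real-time 20 ms stream
            # Tell the server no more audio is coming.
            await ws.send(json.dumps({"type": "CloseStream"}))

        send_task = asyncio.create_task(sender())
        try:
            async for message in ws:
                result = json.loads(message)
                alternatives = result.get("channel", {}).get("alternatives", [])
                if alternatives and alternatives[0].get("transcript"):
                    return time.monotonic() - start
        finally:
            send_task.cancel()
        return None

# Example usage: replay a raw PCM file in 20 ms (640-byte) chunks.
# with open("sample.raw", "rb") as f:
#     chunks = list(iter(lambda: f.read(640), b""))
# print(asyncio.run(measure_ttfb(chunks)))
```

Note this measures from the start of the replayed stream rather than from detected speech onset, so it slightly overstates TTFB if the recording begins with silence.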
Can someone please help here?