When I use "multi-segment language detection" https://github.com/SYSTRAN/faster-whisper?tab=readme-ov-file#multi-segment-language-detection.
I found that it detects language based on accent rather than actual spoken speech.
Ex: I have an audio file, a speaker is Malaysian and he says English, if I use "native language detection" with the first 30 seconds of the audio file then it detects the speech is in English. But when I use "multi-segment language detection" then it detects the speech is in Malay.
reacted with thumbs up emoji reacted with thumbs down emoji reacted with laugh emoji reacted with hooray emoji reacted with confused emoji reacted with heart emoji reacted with rocket emoji reacted with eyes emoji
-
When I use "multi-segment language detection" https://github.com/SYSTRAN/faster-whisper?tab=readme-ov-file#multi-segment-language-detection.
I found that it detects language based on accent rather than actual spoken speech.
Ex: I have an audio file, a speaker is Malaysian and he says English, if I use "native language detection" with the first 30 seconds of the audio file then it detects the speech is in English. But when I use "multi-segment language detection" then it detects the speech is in Malay.
Does anyone have any idea on this issue? Thanks.