Turbo-V3 #894
Comments
+1

Or even integrating the normal large-v3 model would be nice. That one doesn't work either.

Right, the original model.

Hi @m-bain, any plan to support

@brainer3220 I don't know. I tried importing

+1
This project is largely dead, so don't expect many updates; the founder did it as a uni project and has since moved on. That said, you can use Whisper-Turbo by just pulling the model from Hugging Face like so:

```python
whisper_model = whisperx.load_model(
    "deepdml/faster-whisper-large-v3-turbo-ct2",
    device="cuda",
    download_root="models",
    vad_options={"vad_onset": 0, "vad_offset": 0},
    asr_options=asr_options,
)
```

I recommend migrating your code to Whisper-Faster; it has implemented most of the features from WhisperX. If you need diarization, migrate to RevAI's reverb model.
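Note that the snippet above references an `asr_options` variable it never defines. A minimal runnable sketch is below, assuming the CTranslate2-converted turbo model id from the comment; the specific `asr_options` keys shown (`beam_size`, `temperatures`) are illustrative guesses, as the exact set accepted depends on the installed whisperx version:

```python
# Sketch of loading the turbo model through whisperx.
# Hedged: the asr_options values below are hypothetical examples,
# not an exhaustive or confirmed list for any whisperx release.
vad_options = {"vad_onset": 0, "vad_offset": 0}
asr_options = {"beam_size": 5, "temperatures": [0.0]}  # hypothetical values

try:
    import whisperx  # requires: pip install whisperx (and a CUDA device below)
except ImportError:
    whisperx = None  # lets the option dicts above be inspected without the package

if whisperx is not None:
    model = whisperx.load_model(
        "deepdml/faster-whisper-large-v3-turbo-ct2",  # CT2 conversion of large-v3-turbo
        device="cuda",
        download_root="models",
        vad_options=vad_options,
        asr_options=asr_options,
    )
    audio = whisperx.load_audio("audio.mp3")  # hypothetical input file
    result = model.transcribe(audio)
    print(result["segments"])
```

If whisperx is not installed, the script simply defines the option dicts and exits; with it installed, the `transcribe` call returns a dict of timestamped segments.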
New Whisper model Turbo3 is rolled out.
SYSTRAN/faster-whisper#1025