
Turbo-V3 #894

Open
brainer3220 opened this issue Oct 1, 2024 · 7 comments
Comments
@brainer3220

New Whisper model Turbo3 is rolled out.

SYSTRAN/faster-whisper#1025

brainer3220 changed the title from TURBO-V3 to Turbo-V3 on Oct 1, 2024
@philpav

philpav commented Oct 2, 2024

+1

@lemuriandezapada

Even plain large-v3 would be nice to integrate; that one doesn't work either.

@brainer3220
Author

brainer3220 commented Oct 2, 2024

Right, the original model as well, @lemuriandezapada.

@jb892

jb892 commented Oct 3, 2024

Hi @m-bain, any plan to support whisper-large-v3-turbo?

@lemuriandezapada

@brainer3220 I don't know. I tried importing large-v3 instead of large-v2 and the transcription failed miserably.

@wygoralves

+1

@DaddyCodesAlot

DaddyCodesAlot commented Oct 4, 2024

This project is largely dead, so don't expect many updates; the founder did it as a uni project and has since moved on. That being said, you can use Whisper-Turbo by just pulling the model from Hugging Face like so:

import whisperx

# asr_options is a dict of ASR options you define elsewhere in your code
whisper_model = whisperx.load_model(
    "deepdml/faster-whisper-large-v3-turbo-ct2",
    device="cuda",
    download_root="models",
    vad_options={"vad_onset": 0, "vad_offset": 0},
    asr_options=asr_options,
)

I recommend migrating your code to faster-whisper; it has implemented most of the features from WhisperX. If you need diarization, migrate to Rev AI's Reverb model.
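For context, a minimal end-to-end transcription sketch with that CT2-converted turbo checkpoint (assuming whisperx is installed with a CUDA-capable GPU, and that "audio.wav" is a hypothetical input file):

```python
import whisperx

# Pull the CTranslate2-converted turbo weights from Hugging Face.
# "models" as the cache directory and float16 precision are assumptions,
# not requirements of the library.
model = whisperx.load_model(
    "deepdml/faster-whisper-large-v3-turbo-ct2",
    device="cuda",
    compute_type="float16",
    download_root="models",
)

# Load audio and run batched transcription ("audio.wav" is a placeholder path).
audio = whisperx.load_audio("audio.wav")
result = model.transcribe(audio, batch_size=16)

# Each segment carries start/end timestamps and the transcribed text.
for segment in result["segments"]:
    print(f'[{segment["start"]:.2f} -> {segment["end"]:.2f}] {segment["text"]}')
```

Word-level alignment and diarization would be separate passes (whisperx.load_align_model / whisperx.align) on top of this.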


6 participants