Add support for large-v3 #559
Conversation
Shouldn't "large" be the v3 version?
Good point! Should `large` be a shortcut to the latest large model?
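For context, a minimal sketch of what such a shortcut could look like (the alias table and helper name here are assumptions for illustration, not faster-whisper's actual code):

```python
# Hypothetical alias table: "large" resolves to the newest large model.
MODEL_ALIASES = {"large": "large-v3"}

def resolve_model_name(name: str) -> str:
    """Map a shortcut like "large" to a concrete model name,
    leaving concrete names (e.g. "medium") untouched."""
    return MODEL_ALIASES.get(name, name)

print(resolve_model_name("large"))   # "large-v3"
print(resolve_model_name("medium"))  # "medium" (unchanged)
```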
Hello, I am currently getting this error after downloading the large-v3 model. Is there any way I can fix this? File "C:\Users\User\OneDrive\Desktop\GitHub-Projects\Proj-B\faster_whisper\transcribe.py", line 149, in init
Try to upgrade the
Thanks, that worked 👍
Do you intend to add batch_transcribe? The Whisper v3 pipeline on Hugging Face seems to allow transcribing multiple audio files at once; it would be an amazing feature!
I hope the maintainer merges this soon; it has been delayed for a long time.
please please please
Thanks for your PR; it seems to be a duplicate of #578.
In summary: `transcribe` and `translate` were changed. Note: I'm not sure if the way I've implemented this is the best one; feel free to give feedback. =) One thing that could be optimized is the loading of the new tokenizer, which requires the `transformers` library when using `large-v3`.
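As a rough illustration of that dependency, a tokenizer-selection helper might simply branch on the model name (the function and its return values are hypothetical, not the PR's actual implementation):

```python
def tokenizer_backend(model_name: str) -> str:
    """Pick which tokenizer implementation to load for a given model.

    Hypothetical helper: per the discussion above, `large-v3` ships a
    new tokenizer that needs the `transformers` library, while earlier
    models can keep using the bundled tokenizer.
    """
    return "transformers" if model_name == "large-v3" else "builtin"

print(tokenizer_backend("large-v3"))  # "transformers"
print(tokenizer_backend("large-v2"))  # "builtin"
```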