Add tests for Whisper static pipeline #1250
base: master
Conversation
Force-pushed from a1bab59 to 48cb438
With transformers version 4.46.3 the encoder model has a different dynamic shape for input_features; fix for StaticWhisperPipeline - PR #1293
Force-pushed from e697a90 to 3bab16c
@pytest.mark.parametrize("model_descr", get_whisper_models_list(tiny_only=True)) | ||
@pytest.mark.parametrize("test_sample", | ||
[ | ||
# *get_samples_from_dataset(language="fr", length=2), # 1/2 failed |
What failed? Do we have a ticket for this?
For one test (with the Spanish language), there's a mismatch between the expected and actual output (it looks like the language is not detected correctly):
expected: Habritan aguas poco profundas y lo cosas.
actual_out: Habt ihr da noch was poco perfundes und lohosen?
For one test (with the French language), there's an error:
RuntimeError: Check '*roi_end <= *max_dim' failed at src\inference\src\dev\make_tensor.cpp:34
I will create tickets for the failures found.
Created?
Force-pushed from d9fe208 to 2fdac12
Force-pushed from 2fdac12 to 0e11d54
It seems like everything can be covered by test_static_whisper_generation_compare_with_cpu with different inputs (@pytest.mark.parametrize).
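For illustration, a minimal sketch of what such a consolidated parametrized test could look like. The helpers get_whisper_models_list and get_samples_from_dataset come from this PR's test file; the model_descr unpacking, the device choice, and the comparison of .texts are assumptions for illustration, not the repository's actual implementation.

import pytest
import openvino_genai as ov_genai

@pytest.mark.parametrize("model_descr", get_whisper_models_list(tiny_only=True))
@pytest.mark.parametrize(
    "test_sample",
    [
        *get_samples_from_dataset(language="de", length=3),
        *get_samples_from_dataset(language="fr", length=3),
    ],
)
@pytest.mark.precommit
def test_static_whisper_generation_compare_with_cpu(model_descr, test_sample):
    # Assumption: model_descr carries the path to the converted model.
    _, models_path = model_descr
    # Reference transcription from the regular CPU pipeline.
    cpu_pipe = ov_genai.WhisperPipeline(models_path, "CPU")
    expected = cpu_pipe.generate(test_sample)
    # Assumption: the static Whisper pipeline is selected for the NPU device.
    npu_pipe = ov_genai.WhisperPipeline(models_path, "NPU")
    actual = npu_pipe.generate(test_sample)
    assert expected.texts[0] == actual.texts[0]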
"test_sample", get_samples_from_dataset(language="de", length=3) | ||
) | ||
@pytest.mark.precommit | ||
def test_static_whisper_language_de(model_descr, test_sample): |
What does it actually check? How is it different from test_static_whisper_autodetect?
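For context, a hedged sketch of the presumed difference: the language-specific tests would force the language in the generation config, while the autodetect test leaves it unset so the pipeline detects it. The generation-config field names (language, task) below are assumptions for illustration, not verified against this PR.

import openvino_genai as ov_genai

def transcribe(models_path, test_sample, language=None):
    # Assumption: requesting the NPU device selects the static Whisper pipeline.
    pipe = ov_genai.WhisperPipeline(models_path, "NPU")
    config = pipe.get_generation_config()
    if language is not None:
        # e.g. "<|de|>" for a test like test_static_whisper_language_de
        config.language = language
        config.task = "transcribe"
    # Otherwise the language is left unset and autodetected,
    # which is what test_static_whisper_autodetect would exercise.
    return pipe.generate(test_sample, config)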
"test_sample", get_samples_from_dataset(language="fr", length=3) | ||
) | ||
@pytest.mark.precommit | ||
def test_static_whisper_language_fr(model_descr, test_sample): |
Same question: how is it different from test_static_whisper_autodetect?
No description provided.