An amazing bug: importing faster_whisper conflicts with torch.jit #995

Open
WelkinYang opened this issue Sep 7, 2024 · 0 comments

Comments

WelkinYang commented Sep 7, 2024

import os
import torch
from faster_whisper import WhisperModel

device = torch.device('cuda')
torch.set_num_threads(4)

local_file = 'model.pt'
if not os.path.isfile(local_file):
    torch.hub.download_url_to_file('https://models.silero.ai/denoise_models/sns_latest.jit',
                                   local_file)

model = torch.jit.load(local_file)
model.to(device)

a = torch.rand(1, 48000).to(device)
out = model(a)

This is minimal code to reproduce the bug. When the line "from faster_whisper import WhisperModel" is removed, no error occurs; but when it is kept, the following error is raised:

RuntimeError:
The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
RuntimeError: The following operation failed in the TorchScript interpreter.
Traceback of TorchScript (most recent call last):
RuntimeError: default_program(30): error: name followed by "::" must be a class or namespace name
void kernel_0(IndexType totalElements, const TensorInfo<c10::complex,2> t0, const TensorInfo<float,1> t1, const TensorInfo<float,1> t2 ) {
^

default_program(30): error: expected an identifier
void kernel_0(IndexType totalElements, const TensorInfo<c10::complex,2> t0, const TensorInfo<float,1> t1, const TensorInfo<float,1> t2 ) {
^

default_program(30): error: invalid combination of type specifiers
void kernel_0(IndexType totalElements, const TensorInfo<c10::complex,2> t0, const TensorInfo<float,1> t1, const TensorInfo<float,1> t2 ) {
^

default_program(30): error: too few arguments for class template "TensorInfo"
void kernel_0(IndexType totalElements, const TensorInfo<c10::complex,2> t0, const TensorInfo<float,1> t1, const TensorInfo<float,1> t2 ) {
^

default_program(30): error: expected a type specifier
void kernel_0(IndexType totalElements, const TensorInfo<c10::complex,2> t0, const TensorInfo<float,1> t1, const TensorInfo<float,1> t2 ) {
^

default_program(34): error: name followed by "::" must be a class or namespace name
c10::complex t0_buf[4];
^

default_program(34): error: expected an identifier
c10::complex t0_buf[4];
^

default_program(34): error: expected a ";"
c10::complex t0_buf[4];
^

default_program(52): error: identifier "t0" is undefined
size_t t0_dimIndex1 = t0_linearIndex % t0.sizes[1];
^

default_program(73): error: identifier "t0_buf" is undefined
for(int i = 0; i<4; i++) t0_buf[i] = t0.data[t0_offset + i];
^

default_program(78): error: name followed by "::" must be a class or namespace name
c10::complex n0 = t0_buf[i];
^

default_program(78): error: expected an identifier
c10::complex n0 = t0_buf[i];
^

default_program(78): error: expected a ";"
c10::complex n0 = t0_buf[i];
^

default_program(80): error: identifier "n0" is undefined
float n2 = fabs(((float) n0));
^

default_program(100): error: identifier "t0" is undefined
size_t t0_dimIndex1 = t0_linearIndex % t0.sizes[1];
^

default_program(121): error: name followed by "::" must be a class or namespace name
c10::complex n0 = __ldg(&t0.data[t0_offset]);
^

default_program(121): error: expected an identifier
c10::complex n0 = __ldg(&t0.data[t0_offset]);
^

default_program(121): error: expected a ";"
c10::complex n0 = __ldg(&t0.data[t0_offset]);
^

default_program(123): error: identifier "n0" is undefined
float n2 = fabs(((float) n0));
^

19 errors detected in the compilation of "default_program".
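
One way to check whether the import alone changes which CUDA libraries end up in the process (a Linux-only diagnostic sketch; the guess that faster_whisper's ctranslate2 backend brings in its own CUDA/NVRTC libraries is only an assumption, not something confirmed here):

# Diagnostic sketch (assumes Linux): compare the CUDA-related shared libraries
# mapped into the process before and after importing faster_whisper.
def cuda_libs():
    with open('/proc/self/maps') as f:
        return sorted({line.split()[-1] for line in f
                       if any(s in line for s in ('nvrtc', 'cudart', 'cublas'))})

import torch
print('before import:', cuda_libs())

from faster_whisper import WhisperModel  # the import that triggers the error
print('after import: ', cuda_libs())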

However, when the device is changed from cuda to cpu, keeping this import does not lead to the error either.
It's very confusing.
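
The failing "default_program" compile looks like it comes from the TorchScript CUDA kernel fuser, so one thing worth trying (an untested sketch, not a confirmed fix; _jit_override_can_fuse_on_gpu is an internal PyTorch switch, and whether it helps here is only an assumption) is to turn off GPU fusion so the scripted model runs unfused kernels:

import torch

# Untested workaround sketch: disable TorchScript GPU kernel fusion so the
# scripted denoiser falls back to regular (unfused) CUDA kernels instead of
# compiling the "default_program" that fails above.
torch._C._jit_override_can_fuse_on_gpu(False)

device = torch.device('cuda')
model = torch.jit.load('model.pt').to(device)
a = torch.rand(1, 48000).to(device)
out = model(a)

If this makes the error go away, it only avoids the fused-kernel compilation path; it would not address whatever library conflict the faster_whisper import introduces.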
