Unexpected error: Required inputs (['state']) are missing from input feed (['input', 'h', 'c', 'sr']). #249

Closed
HuangMason320 opened this issue Jul 3, 2024 · 10 comments


HuangMason320 commented Jul 3, 2024

I ran the code below using WSL Ubuntu on Windows:

docker run -p 9090:9090 --runtime=nvidia --gpus all --entrypoint /bin/bash -it ghcr.io/collabora/whisperlive-tensorrt

# Build small.en engine
bash build_whisper_tensorrt.sh /app/TensorRT-LLM-examples small.en

# Run server with small.en
python3 run_server.py --port 9090 \
                      --backend tensorrt \
                      --trt_model_path "/app/TensorRT-LLM-examples/whisper/whisper_small_en"

Below is the client code:

from whisper_live.client import TranscriptionClient
import sounddevice

client = TranscriptionClient(
  "localhost",
  9090,
  lang="en",
  translate=False,
  model="small",
  use_vad=False,
  save_output_recording=True,                         # Only used for microphone input, False by Default
  output_recording_filename="./output_recording.wav"  # Only used for microphone input
)
client()

and I got Unexpected error: Required inputs (['state']) are missing from input feed (['input', 'h', 'c', 'sr']). on the server side when the client connects to the server.

Below is the server window:
(screenshot omitted)

P.S. I'm currently using 4070 GPU
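For context on this error (my reading of it, not official WhisperLive documentation): newer Silero VAD ONNX models (v5) renamed the recurrent-state inputs, requiring a single state input where the older v4 model took separate h and c states. Running a v5 model with v4-style feed names produces exactly this message. A minimal sketch that mimics onnxruntime-style input-feed validation (check_input_feed is an illustrative helper, not a real onnxruntime API):

```python
def check_input_feed(required_inputs, input_feed):
    """Raise if any required model input is missing from the feed,
    mimicking onnxruntime's input-feed validation message."""
    missing = [name for name in required_inputs if name not in input_feed]
    if missing:
        raise ValueError(
            f"Required inputs ({missing}) are missing from "
            f"input feed ({list(input_feed)})."
        )

# Hypothetical v5-style model inputs served with a v4-style feed,
# which reproduces the error from this issue:
v5_inputs = ["input", "state", "sr"]
v4_feed = {"input": None, "h": None, "c": None, "sr": None}
try:
    check_input_feed(v5_inputs, v4_feed)
except ValueError as e:
    # Required inputs (['state']) are missing from input feed (['input', 'h', 'c', 'sr']).
    print(e)
```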

@makaveli10 (Collaborator)

Should be closed by #247


HuangMason320 commented Jul 4, 2024

I hope my understanding is correct. I have made all the changes from #247, but I still can't get it to run successfully.
I also started from a new Docker image. Is there anything else I need to change? Thanks for your answer.

#Delete 
def download(model_url="https://github.com/snakers4/silero-vad/raw/master/files/silero_vad.onnx"):
#Add
def download(model_url="https://github.com/snakers4/silero-vad/raw/v4.0/files/silero_vad.onnx"):
#Delete
key: ${{ runner.os }}-pip-${{ matrix.python-version }}-${{ hashFiles('requirements/server.txt', 'requirements/client.txt') }}-${{ github.run_id }}
#Add
key: ${{ runner.os }}-pip-${{ matrix.python-version }}-${{ hashFiles('requirements/server.txt', 'requirements/client.txt') }}
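For reference, the point of pinning the URL to the v4.0 tag is that the master branch of silero-vad later replaced silero_vad.onnx with the v5 model, whose input signature differs. A minimal sketch of a pinned, cache-aware download helper (function shape and cache path are illustrative assumptions, not WhisperLive's actual code):

```python
import os
import urllib.request

# Pinned to the v4.0 release so a later model change on master cannot
# silently break the inference code (the bug in this issue).
V4_URL = "https://github.com/snakers4/silero-vad/raw/v4.0/files/silero_vad.onnx"

def download(model_url=V4_URL,
             cache_dir=os.path.expanduser("~/.cache/whisper-live")):
    """Download the pinned Silero VAD model unless it is already cached."""
    os.makedirs(cache_dir, exist_ok=True)
    model_path = os.path.join(cache_dir, "silero_vad.onnx")
    # Note: a stale cached copy of the wrong model version is exactly
    # why clearing ~/.cache/whisper-live/ is suggested below.
    if not os.path.exists(model_path):
        urllib.request.urlretrieve(model_url, model_path)
    return model_path
```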

@makaveli10 (Collaborator)

@HuangMason320 if using docker, clear your docker caches with docker system prune.
If running outside of docker, remove the cache: rm -rf ~/.cache/whisper-live/


HuangMason320 commented Jul 4, 2024

Thanks for your answer!

I'm using docker, so I ran docker system prune to clear the cache.

After that, I run the code again:

docker run -p 9090:9090 --runtime=nvidia --gpus all --entrypoint /bin/bash -it ghcr.io/collabora/whisperlive-tensorrt

# Build small.en engine
bash build_whisper_tensorrt.sh /app/TensorRT-LLM-examples small.en

# Run server with small.en
python3 run_server.py --port 9090 \
                      --backend tensorrt \
                      --trt_model_path "/app/TensorRT-LLM-examples/whisper/whisper_small_en"

and the error still exists.

But I found that the path mentioned in the screenshot should be https://github.com/snakers4/silero-vad/raw/v4.0/files/silero_vad.onnx instead of https://github.com/snakers4/silero-vad/raw/master/files/silero_vad.onnx.

Am I correct?
(screenshot omitted)

If it is correct, is there anything else I forgot to modify?
P.S. I've modified the path, re-cloned the GitHub code, and set numpy to version < 2.
(screenshot omitted)

@nullonesix

Hey, I have the same issue, and changing the download link along with docker system prune fails to resolve it:

root@f341103f07e9:/app# python3 run_server.py --port 9092 \
                      --backend tensorrt \
                      --trt_model_path "/app/TensorRT-LLM-examples/whisper/whisper_small_en"
[TensorRT-LLM] TensorRT-LLM version: 0.9.0
--2024-07-08 14:11:22--  https://github.com/snakers4/silero-vad/raw/master/files/silero_vad.onnx
Resolving github.com (github.com)... 140.82.113.4
Connecting to github.com (github.com)|140.82.113.4|:443... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://raw.githubusercontent.com/snakers4/silero-vad/master/files/silero_vad.onnx [following]
--2024-07-08 14:11:22--  https://raw.githubusercontent.com/snakers4/silero-vad/master/files/silero_vad.onnx
Resolving raw.githubusercontent.com (raw.githubusercontent.com)... 185.199.109.133, 185.199.110.133, 185.199.108.133, ...
Connecting to raw.githubusercontent.com (raw.githubusercontent.com)|185.199.109.133|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: 2313101 (2.2M) [application/octet-stream]
Saving to: ‘/root/.cache/whisper-live/silero_vad.onnx’

/root/.cache/whisper-live/sil 100%[=================================================>]   2.21M  --.-KB/s    in 0.02s

2024-07-08 14:11:22 (103 MB/s) - ‘/root/.cache/whisper-live/silero_vad.onnx’ saved [2313101/2313101]

/app/whisper_live/vad.py:141: UserWarning: The given NumPy array is not writable, and PyTorch does not support non-writable tensors. This means writing to this tensor will result in undefined behavior. You may want to copy the array to protect its data or make it writable before converting it to a tensor. This type of warning will be suppressed for the rest of this program. (Triggered internally at ../torch/csrc/utils/tensor_numpy.cpp:206.)
  speech_prob = self.model(torch.from_numpy(audio_frame), self.frame_rate).item()
[07/08/2024-14:11:26] Unexpected error: Required inputs (['state']) are missing from input feed (['input', 'h', 'c', 'sr']).
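As an aside, the UserWarning in the log above is unrelated to the main error: it appears because torch.from_numpy is handed a read-only NumPy view of the received audio buffer. Copying the frame first would silence it. A NumPy-only sketch of the writability issue (the torch call itself is omitted, since the fix is on the NumPy side):

```python
import numpy as np

raw = bytes(8)  # stand-in for an audio buffer received over the socket
frame = np.frombuffer(raw, dtype=np.int16)  # zero-copy, read-only view
assert not frame.flags.writeable            # this is what triggers the warning

safe = frame.copy()  # writable copy; torch.from_numpy(safe) would not warn
assert safe.flags.writeable
```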


HuangMason320 commented Jul 9, 2024

@makaveli10 @nullonesix I've solved this problem: the thing I changed is vad.py in the docker image. The vad.py in docker is still using model_url="https://github.com/snakers4/silero-vad/raw/master/files/silero_vad.onnx", which causes the error. After I used vim to modify vad.py inside docker, it works.

Is it possible to update the vad.py file in the published docker image so that the error is prevented?
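Until a rebuilt image ships, the fix described above can be scripted instead of edited by hand in vim. A minimal sketch that rewrites the default model URL in vad.py in place (the /app/whisper_live/vad.py path is an assumption about the container layout, based on the warning in the log above):

```python
from pathlib import Path

OLD = "https://github.com/snakers4/silero-vad/raw/master/files/silero_vad.onnx"
NEW = "https://github.com/snakers4/silero-vad/raw/v4.0/files/silero_vad.onnx"

def pin_vad_url(vad_py="/app/whisper_live/vad.py"):
    """Rewrite the default Silero VAD model URL in vad.py to the pinned
    v4.0 release, matching the change merged in #247."""
    path = Path(vad_py)
    path.write_text(path.read_text().replace(OLD, NEW))
```

This would be run once inside the container (e.g. via python3 -c "...") before starting the server.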

@HuangMason320 (Author)

@makaveli10 Sorry, like I mentioned before, is it possible to republish a new version of the docker image?
Thanks for your answer.

@makaveli10 (Collaborator)

@HuangMason320
It looks like you have the docker image cached. Remove your old docker image and run docker system prune to clear the cache. After that, pull the newer image:

docker run -it --gpus all -p 9090:9090 ghcr.io/collabora/whisperlive-gpu:latest

I think the docker container has already been updated via CI with the merged PR #247; I just checked on my end and I don't see the issue.

Hopefully, you have it running.

@HuangMason320 (Author)

Is that image only for Faster-Whisper, or can it be used with both Faster-Whisper and TensorRT?


makaveli10 commented Sep 16, 2024

Both Faster-Whisper and TensorRT have their own images. That being said, the TensorRT image should allow you to run the Faster-Whisper backend as well, although it is not tested. Refer to the README docker section.
