
can't start server in windows #57

Closed
systemoutprintlnhelloworld opened this issue Oct 17, 2023 · 7 comments

Comments

@systemoutprintlnhelloworld

systemoutprintlnhelloworld commented Oct 17, 2023

Here is the log after running this code:
from whisper_live.server import TranscriptionServer
server = TranscriptionServer()
server.run("0.0.0.0", 9090)

Traceback (most recent call last):
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0\lib\runpy.py", line 187, in _run_module_as_main
    mod_name, mod_spec, code = _get_module_details(mod_name, _Error)
  File "C:\Program Files\WindowsApps\PythonSoftwareFoundation.Python.3.10_3.10.3056.0_x64__qbz5n2kfra8p0\lib\runpy.py", line 110, in _get_module_details
    __import__(pkg_name)
  File "D:\Code\whisper\WhisperLive-0.0.7\main.py", line 2, in <module>
    server = TranscriptionServer()
  File "D:\Code\whisper\WhisperLive-0.0.7\whisper_live\server.py", line 38, in __init__
    self.vad_model = VoiceActivityDetection()
  File "D:\Code\whisper\WhisperLive-0.0.7\whisper_live\vad.py", line 21, in __init__
    self.session = onnxruntime.InferenceSession(path, providers=['CPUExecutionProvider'], sess_options=opts)
  File "C:\Users\25813\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 419, in __init__
    self._create_inference_session(providers, provider_options, disabled_optimizers)
  File "C:\Users\25813\AppData\Local\Packages\PythonSoftwareFoundation.Python.3.10_qbz5n2kfra8p0\LocalCache\local-packages\Python310\site-packages\onnxruntime\capi\onnxruntime_inference_collection.py", line 452, in _create_inference_session
    sess = C.InferenceSession(session_options, self._model_path, True, self._read_config_from_model)
onnxruntime.capi.onnxruntime_pybind11_state.Fail: [ONNXRuntimeError] : 1 : FAIL : Load model from C:\Users\25813/.cache/whisper-live/silero_vad.onnx failed:C:\a\_work\1\s\onnxruntime\core\graph\model.cc:134 onnxruntime::Model::Model ModelProto does not have a graph.

I installed the requirements from client.txt and server.txt. I could not install via setup.sh, since I am running both the server and the client on the same Windows system; setup.py also fails, perhaps because my account does not have write access to that directory.

Also, I have not installed CUDA. Is this error related to that, or to something else?
Looking forward to your early reply.

@makaveli10
Collaborator

@systemoutprintlnhelloworld can you check whether the ONNX model exists at
C:\Users\25813/.cache/whisper-live/silero_vad.onnx
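A quick way to check (a minimal sketch; the helper name is illustrative, and the path is the one from the traceback above): note that an empty or truncated file triggers the same "ModelProto does not have a graph" error as a missing one, so checking the size matters as much as checking existence.

```python
import os

def onnx_model_looks_valid(path: str) -> bool:
    """Return True if the file exists and is non-empty.

    A zero-byte file here is the usual sign of a failed download and
    produces onnxruntime's "ModelProto does not have a graph" error.
    """
    return os.path.isfile(path) and os.path.getsize(path) > 0

# Path taken from the traceback above; adjust the user name for your machine.
model_path = os.path.expanduser("~/.cache/whisper-live/silero_vad.onnx")
if not onnx_model_looks_valid(model_path):
    print(f"{model_path} is missing or empty; delete it and re-download.")
```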

@systemoutprintlnhelloworld
Author

[screenshot]
It's an empty file.

@makaveli10
Collaborator

can you try changing the download path here

target_dir = os.path.expanduser("~/.cache/whisper-live/")

and see if that works
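For example (a sketch; the directory name below is arbitrary — pick any location your account can write to):

```python
import os

# Hypothetical alternative cache location; any writable directory works.
target_dir = os.path.expanduser("~/whisper-live-cache/")
os.makedirs(target_dir, exist_ok=True)
print(target_dir)
```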

@systemoutprintlnhelloworld
Author

Thanks for your reply!
It was a network problem: I downloaded the model manually and put it in the right place.

@systemoutprintlnhelloworld
Author

Here is another question:
With the server running on localhost:9090, when I open a video in the browser and click this extension to record, the extension's console shows errors like
The message port closed before a response was received
or
Cannot access contents of the page. Extension manifest must request permission to access the respective host.
as in this picture:
[screenshot]
I tried running the extension in Chrome as well, but I get the same errors as in Edge.
By the way, the Network tab is empty, so there is no request between the browser and the server.

Could you please tell me how to fix it, or is there anything I have overlooked?

@makaveli10
Collaborator

Just curious, do these errors cause issues in transcription?
Anyway, we fixed it in the latest PR #58; just do a git pull and these issues should go away.

@rpurinton

rpurinton commented Oct 22, 2023

FYI, the reason this doesn't work on Windows is the subprocess.run(["wget", "-O", ...]) call — wget usually isn't available on Windows.

workaround is download this: https://github.com/snakers4/silero-vad/raw/master/files/silero_vad.onnx

and place it manually in ~\.cache\whisper-live in your home directory;
for example, for me the folder is C:\Users\mir4\.cache\whisper-live

this might not be the only thing necessary for windows setup though, it's just the first problem i ran into running run_server.py.
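A cross-platform alternative (a sketch, not the project's actual code; the helper name is illustrative) is to replace the wget subprocess call with Python's own urllib.request, which works the same on Windows, Linux, and macOS:

```python
import os
import urllib.request

def download_file(url: str, dest: str) -> str:
    """Fetch url to dest without shelling out to wget.

    Creates the destination directory if it does not exist, so the
    ~/.cache/whisper-live/ folder need not be prepared in advance.
    """
    os.makedirs(os.path.dirname(dest), exist_ok=True)
    urllib.request.urlretrieve(url, dest)
    return dest

# URL from the comment above; destination matches the expected cache path.
# download_file(
#     "https://github.com/snakers4/silero-vad/raw/master/files/silero_vad.onnx",
#     os.path.expanduser("~/.cache/whisper-live/silero_vad.onnx"),
# )
```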

Notes:
I also had problems with the server requirements (server.txt) complaining that torch 1 is not available; I'm going to try with torch 2, which I already had installed.

Be sure to allow Python through the Windows firewall.

I was able to get the server running and confirmed it was listening on port 9090.

I created a run_client.py:

from whisper_live.client import TranscriptionClient

client = TranscriptionClient(
    "127.0.0.1", "9090", is_multilingual=False, lang="en", translate=False
)
client()

I tried speaking and it did display my words, though it was slow. I'm running on a VM, and even though GPU/CUDA passthrough is enabled, I know whisper-faster runs faster in CPU mode on my machine with 4 threads on the tiny.en model.

I am going to dig in and see what I need to do to tune this on the server. Thanks for making this project; it's exactly what I needed for mine.
