Problem when trying to use tortoise-tts after following the exact setup #520
capybarapower started this conversation in General
Replies: 1 comment
-
[EDIT: Just found a solution that worked for me: I ran the Anaconda Prompt as administrator!] Had the same problem, have you found any solution?
-
(tortoise) PS C:\Users\zen\tortoise-tts> cd C:\Users\zen\tortoise-tts
(tortoise) PS C:\Users\zen\tortoise-tts> python tortoise/do_tts.py --text "Sugi pula fabi" --voice lain --preset fast
Downloading (…)lve/main/config.json: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 2.11k/2.11k [00:00<00:00, 1.05MB/s]
C:\Users\zen\miniconda3\envs\tortoise\lib\site-packages\huggingface_hub\file_download.py:133: UserWarning: `huggingface_hub` cache-system uses symlinks by default to efficiently store duplicated files but your machine does not support them in C:\Users\zen\.cache\huggingface\hub. Caching files will still work but in a degraded version that might require more space on your disk. This warning can be disabled by setting the `HF_HUB_DISABLE_SYMLINKS_WARNING` environment variable. For more details, see https://huggingface.co/docs/huggingface_hub/how-to-cache#limitations.
To support symlinks on Windows, you either need to activate Developer Mode or to run Python as an administrator. In order to activate developer mode, see this article: https://docs.microsoft.com/en-us/windows/apps/get-started/enable-your-device-for-development
  warnings.warn(message)
Downloading pytorch_model.bin: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.26G/1.26G [00:34<00:00, 37.1MB/s]
Downloading (…)rocessor_config.json: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 159/159 [00:00<00:00, 159kB/s]
Downloading (…)olve/main/vocab.json: 100%|████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 1.61k/1.61k [00:00<00:00, 805kB/s]
Downloading (…)okenizer_config.json: 100%|███████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 181/181 [00:00<00:00, 99.6kB/s]
Downloading (…)cial_tokens_map.json: 100%|█████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████████| 85.0/85.0 [00:00<00:00, 42.4kB/s]
Downloading autoregressive.pth from https://huggingface.co/jbetker/tortoise-tts-v2/resolve/main/.models/autoregressive.pth...
100% |########################################################################|
Done.
Downloading diffusion_decoder.pth from https://huggingface.co/jbetker/tortoise-tts-v2/resolve/main/.models/diffusion_decoder.pth...
100% |########################################################################|
Done.
Downloading clvp2.pth from https://huggingface.co/jbetker/tortoise-tts-v2/resolve/main/.models/clvp2.pth...
100% |########################################################################|
Done.
Downloading vocoder.pth from https://huggingface.co/jbetker/tortoise-tts-v2/resolve/main/.models/vocoder.pth...
100% |########################################################################|
Done.
C:\Users\zen\tortoise-tts\tortoise\utils\audio.py:17: WavFileWarning: Chunk (non-data) not understood, skipping it.
sampling_rate, data = read(full_path)
Traceback (most recent call last):
  File "C:\Users\zen\tortoise-tts\tortoise\do_tts.py", line 37, in <module>
    gen, dbg_state = tts.tts_with_preset(args.text, k=args.candidates, voice_samples=voice_samples, conditioning_latents=conditioning_latents,
  File "C:\Users\zen\tortoise-tts\tortoise\api.py", line 329, in tts_with_preset
    return self.tts(text, **settings)
  File "C:\Users\zen\tortoise-tts\tortoise\api.py", line 393, in tts
    auto_conditioning, diffusion_conditioning, auto_conds, _ = self.get_conditioning_latents(voice_samples, return_mels=True)
  File "C:\Users\zen\tortoise-tts\tortoise\api.py", line 274, in get_conditioning_latents
    auto_conds.append(format_conditioning(vs, device=self.device))
  File "C:\Users\zen\tortoise-tts\tortoise\api.py", line 114, in format_conditioning
    mel_clip = TorchMelSpectrogram()(clip.unsqueeze(0)).squeeze(0)
  File "C:\Users\zen\miniconda3\envs\tortoise\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\zen\miniconda3\envs\tortoise\lib\site-packages\tortoise-2.4.2-py3.9.egg\tortoise\models\arch_util.py", line 323, in forward
    mel = self.mel_stft(inp)
  File "C:\Users\zen\miniconda3\envs\tortoise\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\zen\miniconda3\envs\tortoise\lib\site-packages\torchaudio\transforms\_transforms.py", line 651, in forward
    mel_specgram = self.mel_scale(specgram)
  File "C:\Users\zen\miniconda3\envs\tortoise\lib\site-packages\torch\nn\modules\module.py", line 1501, in _call_impl
    return forward_call(*args, **kwargs)
  File "C:\Users\zen\miniconda3\envs\tortoise\lib\site-packages\torchaudio\transforms\_transforms.py", line 412, in forward
    mel_specgram = torch.matmul(specgram.transpose(-1, -2), self.fb).transpose(-1, -2)
RuntimeError: CUDA error: CUBLAS_STATUS_NOT_INITIALIZED when calling `cublasCreate(handle)`
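
A note on the huggingface_hub symlink warning near the top of the log: it is harmless for generation, the cache just falls back to copying files. If you don't want to enable Developer Mode or an elevated prompt, the warning itself names the switch, HF_HUB_DISABLE_SYMLINKS_WARNING. A minimal sketch, assuming you drive Tortoise from your own Python script rather than do_tts.py, is to set that variable before anything that pulls in huggingface_hub gets imported; setting it in PowerShell before launching do_tts.py has the same effect.

```python
import os

# HF_HUB_DISABLE_SYMLINKS_WARNING is the variable named in the warning itself.
# It has to be set before huggingface_hub (or anything importing it, e.g.
# transformers / tortoise) is loaded. It only silences the warning; the cache
# still works, just by copying files instead of symlinking them.
os.environ["HF_HUB_DISABLE_SYMLINKS_WARNING"] = "1"
```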
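The actual failure, CUBLAS_STATUS_NOT_INITIALIZED, is raised the first time a matrix multiply runs on the GPU (here the mel-spectrogram matmul inside torchaudio), so it usually points at the CUDA setup rather than at Tortoise itself: a PyTorch build that doesn't match the installed NVIDIA driver, another process holding the card, or the GPU running out of memory while cuBLAS initializes. A quick check, independent of tortoise-tts and using only standard torch calls, is to force cuBLAS to initialize with a tiny matmul and see whether the same error appears:

```python
import torch

# Report what PyTorch was built against and what the runtime actually sees.
print("torch:", torch.__version__, "| built with CUDA:", torch.version.cuda)
print("cuda available:", torch.cuda.is_available())

if torch.cuda.is_available():
    dev = torch.device("cuda:0")
    props = torch.cuda.get_device_properties(dev)
    print("device:", props.name, "| total memory (GB):", round(props.total_memory / 1024**3, 1))

    # A tiny matmul forces cuBLAS to initialize. If this also raises
    # CUBLAS_STATUS_NOT_INITIALIZED, the problem is the CUDA install or
    # available GPU memory, not tortoise-tts.
    a = torch.randn(8, 8, device=dev)
    b = torch.randn(8, 8, device=dev)
    print("matmul ok:", torch.matmul(a, b).shape)
```

If the tiny matmul fails too, reinstalling a PyTorch build that matches the driver (or freeing GPU memory) is the place to start; if it succeeds, the error during Tortoise is more likely memory pressure from the models themselves.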