Is my GPU RAM causing problems? I only have 8 GB #281
Comments
Nope, same problem, same error. I have a 3060 Ti with 8 GB VRAM and an eighth-gen Core i7 with 32 GB RAM.
Maybe something else is hogging your GPU memory? Try something like …
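The exact command in that comment was lost in extraction. As a hedged sketch of one common way to check for other processes holding GPU memory, assuming the NVIDIA driver's `nvidia-smi` tool is on `PATH` (the function name here is illustrative, not from the thread):

```python
import shutil
import subprocess

def list_gpu_processes() -> str:
    """Return a CSV listing of processes currently using the GPU,
    or a note if nvidia-smi is unavailable (e.g. no NVIDIA driver)."""
    if shutil.which("nvidia-smi") is None:
        return "nvidia-smi not found; is the NVIDIA driver installed?"
    result = subprocess.run(
        ["nvidia-smi",
         "--query-compute-apps=pid,process_name,used_memory",
         "--format=csv"],
        capture_output=True, text=True,
    )
    return result.stdout

print(list_gpu_processes())
```

If another process (a browser with hardware acceleration, another model UI, etc.) shows up here, closing it frees VRAM before launching Bark.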
Does it specifically require 8 GB or more? In that case, my having only 8 GB might be right at the breaking point. I think I should have invested in a 3060 with 12 GB of VRAM.
It runs with some limitations even with very small VRAM (~2 GB), and you can even toggle it to not use your GPU at all, but you have to know how to apply those switches. Your real problem might be this: the original Bark here has no WebUI or installer, which means you probably installed an outdated fork from somewhere. My best guess would be this one: https://github.com/Fictiverse/bark. However, that branch hasn't been updated in weeks and seems stale, so I'd suggest one of the other GUI branches instead. Just pick your flavour...
Can you post your code snippet and OS? I've seen issues with setting the env variables at the Python level.
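As a sketch of the switches mentioned above: in the upstream suno-ai/bark these are environment variables read at import time, so they must be set before `import bark` runs (variable names match upstream; a fork's WebUI may read different ones):

```python
import os

# Must be set BEFORE `import bark` -- bark reads these when the module loads.
os.environ["SUNO_USE_SMALL_MODELS"] = "True"   # smaller checkpoints, fit ~2 GB VRAM
os.environ["SUNO_OFFLOAD_CPU"] = "True"        # park idle models in system RAM
# PyTorch allocator hint (suggested by the OOM message) to reduce fragmentation:
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:512"

# from bark import preload_models, generate_audio  # import only after the flags are set
```

Setting these from the shell before launching Python works equally well; the pitfall is assigning them after the import, which has no effect.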
I figured out a solution: load modules on demand rather than loading them all at the same time. Try this pull request: #531
Thanks. I will try that!
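The on-demand idea behind that pull request can be sketched generically (this is an illustration, not the PR's actual code): wrap each model in a holder that loads on first use and can be released, so the semantic, coarse, and fine models never occupy VRAM at once.

```python
import gc

class LazyModel:
    """Hold a model that is built on first use and can be released.

    `loader` is any zero-argument function that builds the model.
    It is hypothetical here; in Bark it would wrap load_model for
    one of the semantic/coarse/fine checkpoints.
    """

    def __init__(self, loader):
        self._loader = loader
        self._model = None

    def get(self):
        if self._model is None:          # load on demand, not up front
            self._model = self._loader()
        return self._model

    def release(self):
        self._model = None               # drop our reference
        gc.collect()                     # with CUDA, also torch.cuda.empty_cache()

# Usage with a stand-in loader:
lazy = LazyModel(lambda: {"weights": [1, 2, 3]})
model = lazy.get()        # built here, not at construction time
lazy.release()            # freed before the next model is loaded
```

The peak VRAM then becomes the largest single model rather than the sum of all three, which is exactly what makes 8 GB cards viable.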
Here is the error I get. I am no programmer, but it seems it cannot run with 8 GB VRAM (I might be wrong about that). Anyway, here is the error from the console:
```
Traceback (most recent call last):
  File "G:\Bark\Bark_WebUI\installer_files\env\lib\site-packages\gradio\routes.py", line 412, in run_predict
    output = await app.get_blocks().process_api(
  File "G:\Bark\Bark_WebUI\installer_files\env\lib\site-packages\gradio\blocks.py", line 1299, in process_api
    result = await self.call_function(
  File "G:\Bark\Bark_WebUI\installer_files\env\lib\site-packages\gradio\blocks.py", line 1021, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "G:\Bark\Bark_WebUI\installer_files\env\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "G:\Bark\Bark_WebUI\installer_files\env\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "G:\Bark\Bark_WebUI\installer_files\env\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "G:\Bark\Bark_WebUI\bark\UI.py", line 24, in start
    audio_array = generate_audio(prompt, history_prompt=npz_names[voice])
  File "G:\Bark\Bark_WebUI\bark\bark\api.py", line 107, in generate_audio
    semantic_tokens = text_to_semantic(
  File "G:\Bark\Bark_WebUI\bark\bark\api.py", line 25, in text_to_semantic
    x_semantic = generate_text_semantic(
  File "G:\Bark\Bark_WebUI\bark\bark\generation.py", line 428, in generate_text_semantic
    preload_models()
  File "G:\Bark\Bark_WebUI\bark\bark\generation.py", line 362, in preload_models
    _ = load_model(
  File "G:\Bark\Bark_WebUI\bark\bark\generation.py", line 310, in load_model
    model = _load_model_f(ckpt_path, device)
  File "G:\Bark\Bark_WebUI\bark\bark\generation.py", line 275, in _load_model
    model.to(device)
  File "G:\Bark\Bark_WebUI\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 1145, in to
    return self._apply(convert)
  File "G:\Bark\Bark_WebUI\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "G:\Bark\Bark_WebUI\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "G:\Bark\Bark_WebUI\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  [Previous line repeated 2 more times]
  File "G:\Bark\Bark_WebUI\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
    param_applied = fn(param)
  File "G:\Bark\Bark_WebUI\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 1143, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 12.00 MiB (GPU 0; 8.00 GiB total capacity; 7.30 GiB already allocated; 0 bytes free; 7.33 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```