
Is my GPU ram causing problem? I only have 8GB #281

Open
TanvirHafiz opened this issue May 10, 2023 · 8 comments

@TanvirHafiz

Here is the error I get. I'm no programmer, but it seems it can't run with 8 GB of VRAM (I might be wrong about that). Anyway, here is the error from the console:

```
Traceback (most recent call last):
  File "G:\Bark\Bark_WebUI\installer_files\env\lib\site-packages\gradio\routes.py", line 412, in run_predict
    output = await app.get_blocks().process_api(
  File "G:\Bark\Bark_WebUI\installer_files\env\lib\site-packages\gradio\blocks.py", line 1299, in process_api
    result = await self.call_function(
  File "G:\Bark\Bark_WebUI\installer_files\env\lib\site-packages\gradio\blocks.py", line 1021, in call_function
    prediction = await anyio.to_thread.run_sync(
  File "G:\Bark\Bark_WebUI\installer_files\env\lib\site-packages\anyio\to_thread.py", line 31, in run_sync
    return await get_asynclib().run_sync_in_worker_thread(
  File "G:\Bark\Bark_WebUI\installer_files\env\lib\site-packages\anyio\_backends\_asyncio.py", line 937, in run_sync_in_worker_thread
    return await future
  File "G:\Bark\Bark_WebUI\installer_files\env\lib\site-packages\anyio\_backends\_asyncio.py", line 867, in run
    result = context.run(func, *args)
  File "G:\Bark\Bark_WebUI\bark\UI.py", line 24, in start
    audio_array = generate_audio(prompt, history_prompt=npz_names[voice])
  File "G:\Bark\Bark_WebUI\bark\bark\api.py", line 107, in generate_audio
    semantic_tokens = text_to_semantic(
  File "G:\Bark\Bark_WebUI\bark\bark\api.py", line 25, in text_to_semantic
    x_semantic = generate_text_semantic(
  File "G:\Bark\Bark_WebUI\bark\bark\generation.py", line 428, in generate_text_semantic
    preload_models()
  File "G:\Bark\Bark_WebUI\bark\bark\generation.py", line 362, in preload_models
    _ = load_model(
  File "G:\Bark\Bark_WebUI\bark\bark\generation.py", line 310, in load_model
    model = _load_model_f(ckpt_path, device)
  File "G:\Bark\Bark_WebUI\bark\bark\generation.py", line 275, in _load_model
    model.to(device)
  File "G:\Bark\Bark_WebUI\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 1145, in to
    return self._apply(convert)
  File "G:\Bark\Bark_WebUI\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "G:\Bark\Bark_WebUI\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  File "G:\Bark\Bark_WebUI\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 797, in _apply
    module._apply(fn)
  [Previous line repeated 2 more times]
  File "G:\Bark\Bark_WebUI\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 820, in _apply
    param_applied = fn(param)
  File "G:\Bark\Bark_WebUI\installer_files\env\lib\site-packages\torch\nn\modules\module.py", line 1143, in convert
    return t.to(device, dtype if t.is_floating_point() or t.is_complex() else None, non_blocking)
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 12.00 MiB (GPU 0; 8.00 GiB total capacity; 7.30 GiB already allocated; 0 bytes free; 7.33 GiB reserved in total by PyTorch) If reserved memory is >> allocated memory try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
```

@JonathanFly
Contributor

You need to set the OFFLOAD options to true. If you're having trouble, edit generation.py and add this:

(screenshot of the suggested generation.py edit)
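For reference, upstream suno-ai/bark reads two environment flags in generation.py at import time; a minimal sketch of setting them from Python (these flag names come from the upstream README and may differ in a fork):

```python
import os

# Bark's generation.py reads these flags when the package is imported,
# so they must be set BEFORE "from bark import ...".
os.environ["SUNO_OFFLOAD_CPU"] = "True"       # keep idle sub-models in system RAM
os.environ["SUNO_USE_SMALL_MODELS"] = "True"  # load the smaller checkpoints

# Then import and use Bark as usual, e.g.:
# from bark import preload_models, generate_audio
# preload_models()
# audio_array = generate_audio("Hello world")
```

Setting the same variables at the OS level (before launching the WebUI) has the same effect and avoids import-order surprises.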

@TanvirHafiz
Author

Nope, same problem, same error. I have a 3060 Ti with 8 GB VRAM and an eighth-gen Core i7 with 32 GB of RAM.

@gkucsko
Contributor

gkucsko commented May 11, 2023

Maybe something else is hogging your GPU memory? Try `nvidia-smi` in a terminal, or check from Python: https://stackoverflow.com/questions/58216000/get-total-amount-of-free-gpu-memory-and-available-using-pytorch
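A small sketch of the PyTorch route, using `torch.cuda.mem_get_info` (available in recent PyTorch releases); the `fmt_gib` helper is just for display:

```python
def fmt_gib(n_bytes: int) -> str:
    """Format a byte count as GiB with two decimals."""
    return f"{n_bytes / 2**30:.2f} GiB"

try:
    import torch

    if torch.cuda.is_available():
        # mem_get_info returns (free, total) in bytes for the current device.
        free, total = torch.cuda.mem_get_info()
        print(f"free {fmt_gib(free)} of {fmt_gib(total)}")
    else:
        print("PyTorch sees no CUDA device")
except ImportError:
    print("PyTorch is not installed")
```

If "free" is far below 8 GiB before Bark even starts, some other process (a browser, another model UI) is holding VRAM.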

@TanvirHafiz
Author

Does it specifically require 8 GB or more? In that case my having only 8 GB might be right at its breaking point. I think I should have invested in a 3060 with 12 GB of RAM.

@C0untFloyd

> Does it specifically require 8 GB or more? In that case my having only 8 GB might be right at its breaking point. I think I should have invested in a 3060 with 12 GB of RAM.

It runs with some limitations even with very little VRAM (around 2 GB), and you can even toggle it to not use your GPU at all, but you have to know how to apply these switches.
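One such switch, assuming the fork follows standard CUDA conventions, is hiding the GPU entirely so PyTorch falls back to CPU:

```python
import os

# Hiding all CUDA devices forces PyTorch (and therefore Bark) onto the CPU.
# The variable must be set before torch initialises CUDA in this process.
os.environ["CUDA_VISIBLE_DEVICES"] = ""

try:
    import torch
    # With no device visible, this reports False and all work runs on CPU.
    print("CUDA visible to PyTorch:", torch.cuda.is_available())
except ImportError:
    print("PyTorch is not installed in this environment")
```

CPU-only generation is much slower, but it sidesteps VRAM limits entirely.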

Your real problem might be this:
File "G:\Bark\Bark_WebUI\installer_files\

The original Bark has no WebUI or installer, which means you probably installed an outdated fork from somewhere. My best guess is this one: https://github.com/Fictiverse/bark.
From that page:
(screenshot from that repository's page)

However, that branch hasn't been updated in weeks and seems to be stale, so I'd suggest one of the other GUI branches instead:

Just pick your flavour...

@fnrcum

fnrcum commented May 30, 2023

Can you post your code snippet and OS? I've seen issues with setting the env variables at the Python level.
I had the same issue: setting the env vars at the Python level didn't help, but setting them at the console/system/PyCharm level fixed it. My GPU is 8 GB too.
#315 (comment)
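For anyone trying the `max_split_size_mb` hint from the error message, the timing caveat above is the usual pitfall: `PYTORCH_CUDA_ALLOC_CONF` is read when CUDA is first initialised, so it must be set before anything touches the GPU (a sketch; 512 is an illustrative value, not a recommendation):

```python
import os

# Must be set before torch initialises CUDA -- setting it after the first
# GPU allocation has no effect, which is one reason system-level env vars
# are more reliable than setting them mid-script.
os.environ["PYTORCH_CUDA_ALLOC_CONF"] = "max_split_size_mb:512"

try:
    import torch  # imported only after the allocator config is in place
except ImportError:
    pass
```

Setting the same variable in the shell (or in PyCharm's run configuration) before launching the WebUI achieves the same thing without depending on import order.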

@asterocean

I figured out a solution: load modules on demand rather than loading them all at the same time. Try this pull request: #531

@TanvirHafiz
Author

TanvirHafiz commented Feb 27, 2024 via email
