
can this run on a 3080 card? i keep getting errors #146

Open
CharlesOkwuagwu opened this issue Nov 27, 2024 · 6 comments

Comments

@CharlesOkwuagwu

I keep getting the error below. I have an RTX 3080, on Windows 11. Is this sufficient to run the app?

[screenshot of the error]
Traceback (most recent call last):
  File "C:\ProgramData\miniconda3\envs\omnigen\lib\site-packages\gradio\queueing.py", line 624, in process_events
    response = await route_utils.call_process_api(
  File "C:\ProgramData\miniconda3\envs\omnigen\lib\site-packages\gradio\route_utils.py", line 323, in call_process_api
    output = await app.get_blocks().process_api(
  File "C:\ProgramData\miniconda3\envs\omnigen\lib\site-packages\gradio\blocks.py", line 2015, in process_api
    result = await self.call_function(
  File "C:\ProgramData\miniconda3\envs\omnigen\lib\site-packages\gradio\blocks.py", line 1562, in call_function
    prediction = await anyio.to_thread.run_sync(  # type: ignore
  File "C:\ProgramData\miniconda3\envs\omnigen\lib\site-packages\anyio\to_thread.py", line 56, in run_sync
    return await get_async_backend().run_sync_in_worker_thread(
  File "C:\ProgramData\miniconda3\envs\omnigen\lib\site-packages\anyio\_backends\_asyncio.py", line 2441, in run_sync_in_worker_thread
    return await future
  File "C:\ProgramData\miniconda3\envs\omnigen\lib\site-packages\anyio\_backends\_asyncio.py", line 943, in run
    result = context.run(func, *args)
  File "C:\ProgramData\miniconda3\envs\omnigen\lib\site-packages\gradio\utils.py", line 865, in wrapper
    response = f(*args, **kwargs)
  File "D:\OmniGen\app.py", line 27, in generate_image
    output = pipe(
  File "C:\ProgramData\miniconda3\envs\omnigen\lib\site-packages\torch\utils\_contextlib.py", line 116, in decorate_context
    return func(*args, **kwargs)
  File "D:\OmniGen\OmniGen\pipeline.py", line 284, in __call__
    samples = scheduler(latents, func, model_kwargs, use_kv_cache=use_kv_cache, offload_kv_cache=offload_kv_cache)
  File "D:\OmniGen\OmniGen\scheduler.py", line 158, in __call__
    cache = [OmniGenCache(num_tokens_for_img, offload_kv_cache) for _ in range(len(model_kwargs['input_ids']))] if use_kv_cache else None
  File "D:\OmniGen\OmniGen\scheduler.py", line 158, in <listcomp>
    cache = [OmniGenCache(num_tokens_for_img, offload_kv_cache) for _ in range(len(model_kwargs['input_ids']))] if use_kv_cache else None
  File "D:\OmniGen\OmniGen\scheduler.py", line 16, in __init__
    raise RuntimeError("OffloadedCache can only be used with a GPU. If there is no GPU, you need to set use_kv_cache=False, which will result in longer inference time!")
RuntimeError: OffloadedCache can only be used with a GPU. If there is no GPU, you need to set use_kv_cache=False, which will result in longer inference time!
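
The error message itself suggests the fallback: when no CUDA device is visible, pass use_kv_cache=False. A minimal sketch of that fallback (the kv_cache_flag helper name is mine, and the commented pipe(...) call is illustrative, not the app's exact code):

```python
import importlib.util

def kv_cache_flag() -> bool:
    """Return True only when a CUDA-enabled torch build can see a GPU."""
    # If torch is missing entirely, the offloaded KV cache cannot be used.
    if importlib.util.find_spec("torch") is None:
        return False
    import torch
    return torch.cuda.is_available()

# Illustrative call, mirroring the pipe(...) invocation in the traceback:
# output = pipe(prompt, use_kv_cache=kv_cache_flag())
print(kv_cache_flag())
```

As the error notes, running with use_kv_cache=False works without a GPU but makes inference slower.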
@staoxiao
Contributor

@CharlesOkwuagwu, this issue occurs because PyTorch was not installed with CUDA support, so no GPU is visible to the app. You can install a CUDA-enabled build with the following command: pip install torch==2.3.1+cu118 torchvision --extra-index-url https://download.pytorch.org/whl/cu118
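
After reinstalling, a quick sanity check is to confirm the wheel really is a CUDA build: CUDA wheels carry a `+cuXXX` local-version suffix (e.g. `2.3.1+cu118`), while CPU-only wheels report a bare version or `+cpu`. A small sketch (the helper name is mine):

```python
def is_cuda_build(torch_version: str) -> bool:
    """Detect the '+cuXXX' suffix that CUDA-enabled torch wheels carry."""
    return "+cu" in torch_version

# In a working install you would pass torch.__version__ here:
print(is_cuda_build("2.3.1+cu118"))  # True  (CUDA 11.8 wheel)
print(is_cuda_build("2.3.1+cpu"))    # False (CPU-only wheel)
print(is_cuda_build("2.3.1"))        # False (no local-version tag)
```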

@CharlesOkwuagwu
Author

works now

@Jun-Pal

Jun-Pal commented Dec 1, 2024

I got the same error with a 1070 Ti card. Please help me.

[screenshot of the error]

@CharlesOkwuagwu
Author

I think it needs the exact command:
pip install torch==2.3.1+cu118 torchvision --extra-index-url https://download.pytorch.org/whl/cu118

I put this in a new conda environment, and it worked.

@Jun-Pal

Jun-Pal commented Dec 2, 2024

It still doesn't work. :(

@nitinmukesh

@Jun-Pal

See if this guide helps
https://youtu.be/9ZXmXA2AJZ4
