There is insufficient VRAM when running on RTX 4090 #8

Open

hola5156 opened this issue Nov 3, 2024 · 1 comment

hola5156 commented Nov 3, 2024

python infer_video.py -m gmflow+pervfi-vb -data input -fps 11
Testing on Dataset: input
Running VFI method : gmflow+pervfi-vb
TMP (temporary) Dir: /tmp/tmp62froo40
VIS (visualize) Dir: output
Building VFI model...
2024-11-03 07:10:30.219 | INFO | models.generators.PFlowVFI_Vb:__init__:262 - Parameter of decoder: 751875
Done
1
0%| | 0/1 [00:01<?, ?it/s]
Traceback (most recent call last):
File "/output/PerVFI-main/infer_video.py", line 110, in
outs = inferRGB(*inps) # e.g., [I2]
^^^^^^^^^^^^^^^

File "/output/PerVFI-main/infer_video.py", line 84, in inferRGB
tenOut = infer(*inputs, time=t)
^^^^^^^^^^^^^^^^^^^^^^

File "/output/PerVFI-main/build_models.py", line 39, in infer
pred = model.inference_rand_noise(I1, I2, heat=0.3, time=time)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

File "/usr/local/envs/pervfi/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/output/PerVFI-main/models/pipeline.py", line 61, in inference_rand_noise
fflow, bflow = flows if flows is not None else self.compute_flow(img0, img1)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/envs/pervfi/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/output/PerVFI-main/models/flow_estimators/init.py", line 139, in infer
results_dict = model(
^^^^^^
File "/usr/local/envs/pervfi/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/envs/pervfi/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/output/PerVFI-main/models/flow_estimators/gmflow/gmflow.py", line 136, in forward
feature0, feature1 = self.transformer(feature0, feature1, attn_num_splits=attn_splits)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/envs/pervfi/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/usr/local/envs/pervfi/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/output/PerVFI-main/models/flow_estimators/gmflow/transformer.py", line 290, in forward
shifted_window_attn_mask = generate_shift_window_attn_mask(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/output/PerVFI-main/models/flow_estimators/gmflow/transformer.py", line 41, in generate_shift_window_attn_mask
attn_mask = attn_mask.masked_fill(attn_mask != 0, float(-100.0)).masked_fill(attn_mask == 0, float(0.0))
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
torch.cuda.OutOfMemoryError: CUDA out of memory. Tried to allocate 13.84 GiB. GPU 0 has a total capacty of 23.65 GiB of which 4.46 GiB is free. Process 301132 has 19.18 GiB memory in use. Of the allocated memory 18.39 GiB is allocated by PyTorch, and 344.78 MiB is reserved by PyTorch but unallocated. If reserved but unallocated memory is large try setting max_split_size_mb to avoid fragmentation. See documentation for Memory Management and PYTORCH_CUDA_ALLOC_CONF
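
The message itself points at one mitigation: setting PYTORCH_CUDA_ALLOC_CONF to limit block splitting. A minimal sketch, not part of PerVFI (the 128 MiB value is an arbitrary example), assuming it runs before anything initializes CUDA:

```python
# Hedged sketch: apply the allocator hint from the OOM message above.
# max_split_size_mb caps the size of cached blocks the caching allocator
# will split, which can reduce fragmentation. The variable must be set
# before CUDA is initialized, hence before the first torch.cuda call.
import os
os.environ.setdefault("PYTORCH_CUDA_ALLOC_CONF", "max_split_size_mb:128")

import torch  # imported only after the variable is in place

print(f"{torch.cuda.get_device_properties(0).total_memory / 2**30:.2f} GiB total")
```

Equivalently, prefix the launch: PYTORCH_CUDA_ALLOC_CONF=max_split_size_mb:128 python infer_video.py -m gmflow+pervfi-vb -data input -fps 11. Note, though, that the failed allocation is a single 13.84 GiB request against 4.46 GiB free, so fragmentation tuning alone may not rescue 2160p input; downscaling or tiling the frames fed to the flow estimator is the other lever.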

hola5156 (Author) commented Nov 4, 2024

The error above occurred while processing a 2160p video; the error below occurred while processing a 1080p video.

root@gpu-1075d5aec6736b6422097-1-5524:~/PerVFI# python infer_video.py -m gmflow+pervfi -data input -fps 60
Testing on Dataset: input
Running VFI method : gmflow+pervfi
TMP (temporary) Dir: /tmp/tmp5e4xc0ik
VIS (visualize) Dir: output
Building VFI model...
Done
1
0%| | 0/1 [00:02<?, ?it/s]
Traceback (most recent call last):
File "/root/PerVFI/infer_video.py", line 110, in
outs = inferRGB(*inps) # e.g., [I2]
^^^^^^^^^^^^^^^
File "/root/PerVFI/infer_video.py", line 84, in inferRGB
tenOut = infer(*inputs, time=t)
^^^^^^^^^^^^^^^^^^^^^^
File "/root/PerVFI/build_models.py", line 39, in infer
pred = model.inference_rand_noise(I1, I2, heat=0.3, time=time)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda/envs/pervfi/lib/python3.11/site-packages/torch/utils/_contextlib.py", line 115, in decorate_context
return func(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/root/PerVFI/models/pipeline.py", line 64, in inference_rand_noise
pred, _ = self.netG(zs=zs, inps=conds, time=time, code="decode")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda/envs/pervfi/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda/envs/pervfi/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/PerVFI/models/generators/PFlowVFI_V0.py", line 288, in forward
return self.decode(zs, inps, time)
^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/PerVFI/models/generators/PFlowVFI_V0.py", line 316, in decode
conds, smasks = self.get_cond(cond, time=time)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/PerVFI/models/generators/PFlowVFI_V0.py", line 267, in get_cond
feas, bmasks, Ft2 = self.featurePyramid(
^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda/envs/pervfi/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1518, in _wrapped_call_impl
return self._call_impl(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda/envs/pervfi/lib/python3.11/site-packages/torch/nn/modules/module.py", line 1527, in _call_impl
return forward_call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/PerVFI/models/generators/PFlowVFI_V0.py", line 79, in forward
Ft2 = -1 * softsplat(F2t, F2t, m2t.neg().clip(-20.0, 20.0), "soft")
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/root/PerVFI/models/generators/softsplatnet/softsplat.py", line 293, in softsplat
tenOut = softsplat_func.apply(tenIn, tenFlow)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda/envs/pervfi/lib/python3.11/site-packages/torch/autograd/function.py", line 539, in apply
return super().apply(*args, **kwargs) # type: ignore[misc]
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/opt/miniconda/envs/pervfi/lib/python3.11/site-packages/torch/cuda/amp/autocast_mode.py", line 121, in decorate_fwd
return fwd(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^
File "/root/PerVFI/models/generators/softsplatnet/softsplat.py", line 330, in forward
cuda_launch(
File "cupy/_util.pyx", line 64, in cupy._util.memoize.decorator.ret
File "/root/PerVFI/models/generators/softsplatnet/softsplat.py", line 254, in cuda_launch
os.environ['CUDA_HOME'] = cupy.cuda.get_cuda_path()
~~~~~~~~~~^^^^^^^^^^^^^
File "", line 684, in setitem
File "", line 758, in encode
TypeError: str expected, not NoneType
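
This second failure is not a VRAM problem: cupy.cuda.get_cuda_path() returned None because CuPy could not locate the CUDA toolkit, and os.environ only accepts strings. A minimal guard sketch for the failing assignment in softsplat.py (an illustration, not the author's fix):

```python
# Hedged sketch: only set CUDA_HOME when CuPy actually found the toolkit.
# cupy.cuda.get_cuda_path() returns None if no CUDA installation can be
# located, and assigning None into os.environ raises exactly the
# "str expected, not NoneType" TypeError shown in the traceback above.
import os
import cupy

cuda_path = cupy.cuda.get_cuda_path()
if cuda_path is not None:
    os.environ["CUDA_HOME"] = cuda_path
```

CuPy typically discovers the toolkit via the CUDA_PATH environment variable, nvcc on PATH, or /usr/local/cuda, so making any of those available in the container should also let the original line succeed unmodified.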
