Hello, and thank you for your work.
My input image is 1360x1814 and I get the following error:
File "/Users/user3/Documents/projects/TLC/basicsr/models/archs/mprnet_arch.py", line 192, in forward
x = x + y
~~^~~
RuntimeError: The size of tensor a (452) must match the size of tensor b (453) at non-singleton dimension 2
tensor a's shape: torch.Size([1, 144, 452, 680])
tensor b's shape: torch.Size([1, 144, 453, 680])
This happens at some i-th forward step. It is quite strange that the mismatch happens, isn't it?
The size mismatch is caused by adding downsampled-then-upsampled features from the 'UNet-like' skip connections back to features of the original size: when a spatial dimension is odd, the downsample/upsample round trip does not return to the original size.
A toy example (see the sketch after this list):
(1) take an input feature x with spatial size 7,
(2) downsample it by 2x (with padding) to get a feature x_down with spatial size 4,
(3) upsample x_down by 2x to get a feature x_up with spatial size 8,
(4) add x_up to x and hit the spatial size mismatch (8 vs 7).
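For concreteness, here is a minimal sketch that reproduces this behaviour with a generic stride-2 convolution and 2x interpolation; the actual layers in mprnet_arch.py may differ:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

x = torch.randn(1, 16, 7, 7)    # odd spatial size, analogous to the 453 in the issue

# generic stride-2 "downsample with padding": 7 -> floor((7 + 2*1 - 3)/2) + 1 = 4
down = nn.Conv2d(16, 16, kernel_size=3, stride=2, padding=1)
x_down = down(x)

# 2x upsampling: 4 -> 8
x_up = F.interpolate(x_down, scale_factor=2, mode='nearest')

print(x.shape, x_up.shape)      # torch.Size([1, 16, 7, 7]) vs torch.Size([1, 16, 8, 8])
try:
    _ = x + x_up                # same failure mode as the reported RuntimeError
except RuntimeError as e:
    print(e)
```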
The solution is to check the size of the input image and pad it if needed, so that the spatial dimensions are divisible by the total downsampling factor. For example, we can pad x in the above example from 7 to 8. The key code is here.
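Below is a minimal sketch of that workaround: pad the input so that height and width are divisible by the total downsampling factor, then crop the output back to the original size. The names pad_to_multiple, model, and num_downs are illustrative placeholders, not identifiers from the TLC/MPRNet code:

```python
import torch
import torch.nn.functional as F

def pad_to_multiple(img, multiple):
    """Reflect-pad H and W up to the next multiple of `multiple`."""
    _, _, h, w = img.shape
    pad_h = (multiple - h % multiple) % multiple
    pad_w = (multiple - w % multiple) % multiple
    return F.pad(img, (0, pad_w, 0, pad_h), mode='reflect'), (h, w)

num_downs = 3                           # e.g. 3 stride-2 stages -> pad to a multiple of 8
img = torch.randn(1, 3, 1360, 1814)     # the image size from the issue

padded, (h, w) = pad_to_multiple(img, 2 ** num_downs)
print(padded.shape)                     # torch.Size([1, 3, 1360, 1816])

# out = model(padded)                   # run the network on the padded image
# out = out[..., :h, :w]                # crop the output back to the original size
```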