
Error in executing usage.ipynb #92

Open
MadaraPremawardhana opened this issue Jan 21, 2023 · 2 comments

@MadaraPremawardhana

I got the following error when running usage.ipynb:


AttributeError Traceback (most recent call last)
Cell In[4], line 7
4 z = torch.argmax(z_logits, axis=1)
5 z = F.one_hot(z, num_classes=enc.vocab_size).permute(0, 3, 1, 2).float()
----> 7 x_stats = dec(z).float()
8 x_rec = unmap_pixels(torch.sigmoid(x_stats[:, :3]))
9 x_rec = T.ToPILImage(mode='RGB')(x_rec[0])

File c:\Users\HP\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py:1194, in Module._call_impl(self, *input, **kwargs)
1190 # If we don't have any hooks, we want to skip the rest of the logic in
1191 # this function, and just call forward.
1192 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
1193 or _global_forward_hooks or _global_forward_pre_hooks):
-> 1194 return forward_call(*input, **kwargs)
1195 # Do not call functions when jit is used
1196 full_backward_hooks, non_full_backward_hooks = [], []

File c:\Users\HP\AppData\Local\Programs\Python\Python310\lib\site-packages\dall_e\decoder.py:94, in Decoder.forward(self, x)
91 if x.dtype != torch.float32:
92 raise ValueError('input must have dtype torch.float32')
---> 94 return self.blocks(x)

File c:\Users\HP\AppData\Local\Programs\Python\Python310\lib\site-packages\torch\nn\modules\module.py:1194, in Module._call_impl(self, *input, **kwargs)
1190 # If we don't have any hooks, we want to skip the rest of the logic in
...

1268 return modules[name]
-> 1269 raise AttributeError("'{}' object has no attribute '{}'".format(
1270 type(self).__name__, name))

AttributeError: 'Upsample' object has no attribute 'recompute_scale_factor'
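
For reference, the usual cause of this AttributeError is a torch version mismatch: the pickled decoder was created under an older torch release whose Upsample modules did not yet carry the recompute_scale_factor attribute that newer torch versions read in forward(). A minimal sketch of the common workaround (assuming dec is the decoder loaded earlier in the notebook; not verified against every torch version) is to patch the attribute after loading:

```python
import torch
import torch.nn as nn

# Workaround sketch: Upsample modules pickled with an older torch release lack
# this attribute, while newer torch forward() code reads it. Setting it to None
# restores the current default behaviour.
for m in dec.modules():
    if isinstance(m, nn.Upsample) and not hasattr(m, "recompute_scale_factor"):
        m.recompute_scale_factor = None
```

Downgrading torch to a release that matches the pickled model is the other common way around this.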

@DarkApocalypse

I got the same error. I tried to fix it as suggested in #76 (comment), but it doesn't work.

@bitsnaps

I got the same error using cpu as the argument; then I tried different options (cuda:0, cuda) for torch.device(), and they all throw the same exception: RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cpu and cuda:0! (when checking argument for argument weight in method wrapper_CUDA___slow_conv2d_forward)
P.S. I used Google Colab with GPU selected as the runtime type, and torch.cuda.is_available() returns True.
It looks like the model downloaded in the usage notebook was created on a different device!
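
That RuntimeError just means the model weights and the input tensor ended up on different devices. A minimal sketch, assuming the variable names from usage.ipynb (enc, dec, x), is to move both the models and the preprocessed image to the same device before running the cell:

```python
import torch

# Pick one device and keep everything on it.
dev = torch.device("cuda:0" if torch.cuda.is_available() else "cpu")

enc = enc.to(dev)  # encoder loaded earlier in the notebook
dec = dec.to(dev)  # decoder loaded earlier in the notebook
x = x.to(dev)      # preprocessed input image tensor

z_logits = enc(x)  # weights and activations now live on the same device
```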
