RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same #15

Open
quantumcode-martin opened this issue Oct 25, 2022 · 2 comments

@quantumcode-martin

I get this error when running the training loop.

I loaded the state like so:
netG.load_state_dict(torch.load("./checkpoints/trained_netG_original.pth"))

And the error happens here:
errG = BCE_loss(generated_pred, cartoon_labels) + content_loss(generated_data, real_data)

Here is the full log:

RuntimeError                              Traceback (most recent call last)
Cell In [7], line 79
     77 print(type(generated_pred))
     78 print(type(cartoon_labels))
---> 79 errG = BCE_loss(generated_pred, cartoon_labels.to(device)) + content_loss(generated_data, real_data)
     81 # Calculate gradients for G
     82 errG.backward()

File e:\sylvain-chomet-GAN\env\lib\site-packages\torch\nn\modules\module.py:1130, in Module._call_impl(self, *input, **kwargs)
   1126 # If we don't have any hooks, we want to skip the rest of the logic in
   1127 # this function, and just call forward.
   1128 if not (self._backward_hooks or self._forward_hooks or self._forward_pre_hooks or _global_backward_hooks
   1129         or _global_forward_hooks or _global_forward_pre_hooks):
-> 1130     return forward_call(*input, **kwargs)
   1131 # Do not call functions when jit is used
   1132 full_backward_hooks, non_full_backward_hooks = [], []

File e:\sylvain-chomet-GAN\utils\loss.py:40, in ContentLoss.forward(self, x1, x2)
     39 def forward(self, x1, x2):
---> 40     x1 = self.perception(x1)
     41     x2 = self.perception(x2)
     43     return self.omega * self.base_loss(x1, x2)

File e:\sylvain-chomet-GAN\env\lib\site-packages\torch\nn\modules\module.py:1130, in Module._call_impl(self, *input, **kwargs)
...
    452                     _pair(0), self.dilation, self.groups)
--> 453 return F.conv2d(input, weight, bias, self.stride,
    454                 self.padding, self.dilation, self.groups)
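
The traceback bottoms out in F.conv2d: a convolution is receiving a CUDA input while its weights are still ordinary CPU tensors, and the failing call is self.perception(x1) inside ContentLoss.forward. The same message can be reproduced with nothing but a bare nn.Conv2d (illustrative only, not code from this repo):

```python
import torch
import torch.nn as nn

device = torch.device("cuda")  # assumes a CUDA-capable machine

conv = nn.Conv2d(3, 8, kernel_size=3)         # weights are created on the CPU
x = torch.randn(1, 3, 32, 32, device=device)  # input lives on the GPU

conv(x)  # RuntimeError: Input type (torch.cuda.FloatTensor) and weight type (torch.FloatTensor) should be the same
```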
@quantumcode-martin
Author

To clarify: the error seems to be triggered by the call to content_loss.
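
The usual fix for this pattern is to move the loss module, and with it the pretrained feature extractor it wraps as self.perception, onto the same device as the generator's outputs before the training loop starts. A minimal sketch of that idea, assuming ContentLoss is shaped roughly like the one in utils/loss.py (the attribute names perception, base_loss, and omega come from the traceback above; the VGG19 backbone and L1 base loss are assumptions):

```python
import torch
import torch.nn as nn
from torchvision.models import vgg19

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

class ContentLossSketch(nn.Module):
    """Stand-in for utils/loss.py's ContentLoss: frozen feature extractor + weighted base loss."""
    def __init__(self, omega: float = 10.0):
        super().__init__()
        # Assumed backbone; the real repo may use a different network or layer slice.
        self.perception = vgg19(weights="DEFAULT").features.eval()
        for p in self.perception.parameters():
            p.requires_grad_(False)
        self.base_loss = nn.L1Loss()
        self.omega = omega

    def forward(self, x1, x2):
        x1 = self.perception(x1)
        x2 = self.perception(x2)
        return self.omega * self.base_loss(x1, x2)

# The key step: .to(device) moves self.perception's conv weights to the GPU,
# matching the device of generated_data and real_data.
content_loss = ContentLossSketch().to(device)
```

If the notebook already builds content_loss from utils.loss.ContentLoss, adding content_loss.to(device) (or content_loss.perception.to(device)) in the cell that constructs it should be enough to resolve the device mismatch.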

@quantumcode-martin
Author

I managed to run the train.py script successfully, but I'm still having the same issue with experiment.ipynb.
