
RuntimeError: Found dtype Long but expected Float #28

Open

Dyongh613 opened this issue Sep 30, 2022 · 2 comments

@Dyongh613

File "train.py", line 122, in main
model_update(model, step, G_loss, optG_fs2)
File "train.py", line 77, in model_update
loss = (loss / grad_acc_step).backward()
File "C:\Users\12604\Anaconda3\envs\pytorch\lib\site-packages\torch\tensor.py", line 221, in backward
torch.autograd.backward(self, gradient, retain_graph, create_graph)
File "C:\Users\12604\Anaconda3\envs\pytorch\lib\site-packages\torch\autograd\__init__.py", line 132, in backward
allow_unreachable=True) # allow_unreachable flag
RuntimeError: Found dtype Long but expected Float

Hi @keonlee9420. This error occurs when the loss back-propagates. How can I solve it?
This is the dtype of the loss:
[screenshot showing the dtype of the loss]
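A minimal sketch of how this error typically arises, outside the repo's actual train.py (the tensor names and values below are hypothetical): backward() needs a floating-point loss, and the usual culprit is an integer-typed (Long) target reaching a float-only loss such as MSE.

```python
import torch
import torch.nn.functional as F

pred = torch.randn(3, requires_grad=True)   # float32 predictions
target = torch.tensor([1, 0, 2])            # dtype torch.long (Long)

try:
    # A Long target in a float-only loss raises
    # "RuntimeError: Found dtype Long but expected Float"
    # (at forward or backward, depending on the PyTorch version).
    loss = F.mse_loss(pred, target)
    loss.backward()
except RuntimeError as e:
    print(f"RuntimeError: {e}")

# Fix: make sure everything entering the loss is floating point.
loss = F.mse_loss(pred, target.float())
loss.backward()
print(pred.grad.dtype)  # torch.float32
```

Casting the target (or whatever integer tensor feeds the loss) at its source is usually cleaner than casting the final loss, because gradients cannot flow through integer tensors anyway.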

@Dyongh613 (Author)

Hi @keonlee9420, this problem has been solved!
The parameters I passed were already of type float when I debugged them, so I don't know why the error occurred.
Adding loss = loss.type(torch.FloatTensor) makes the backward pass succeed.
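A sketch of that workaround and one caveat (the tensors below are illustrative, not from the repo's code): loss.type(torch.FloatTensor) changes the dtype but also moves a CUDA tensor to the CPU, since torch.FloatTensor is a CPU tensor type; loss.float() changes only the dtype and keeps the device.

```python
import torch

loss = torch.tensor(3, dtype=torch.long)  # a Long "loss", for illustration
loss_f = loss.float()                     # device-preserving cast
print(loss_f.dtype)  # torch.float32

# A dtype cast is itself differentiable, so casting a float loss
# before backward() does not break the autograd graph:
x = torch.tensor(2.0, requires_grad=True)
loss2 = (x * x).double().float()
loss2.backward()
print(x.grad)  # tensor(4.)
```

Note that casting only helps when the graph upstream is already float; if the loss became Long because integer tensors entered the computation, no gradient reaches those tensors regardless of the final cast.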

@Frei2

Frei2 commented Mar 4, 2024

@Dyongh613 May I ask whether you trained this model (PortaSpeech) on Windows?
