
Reproducing the results #10

Open
shem-anton opened this issue Jun 12, 2019 · 3 comments


@shem-anton

Hi! I tried to reproduce the results with CUDA 9 and PyTorch 1.1. I ran into a couple of difficulties that required me to slightly modify the code. Specifically, there are these two problems:

  • RuntimeError: expand(torch.cuda.FloatTensor{[36, 1, 128, 128]}, size=[36, 128, 128]): the number of sizes provided (3) must be greater or equal to the number of dimensions in the tensor (4)
    at the lines
    output_masked[:,d,:,:] = input_mask[:,d,:,:].unsqueeze(1) * output
    target_masked[:,d,:,:] = input_mask[:,d,:,:].unsqueeze(1) * target
    I had to change them to
    output_masked[:,d,:,:] = (input_mask[:,d,:,:].unsqueeze(1) * output).squeeze(1)
    target_masked[:,d,:,:] = (input_mask[:,d,:,:].unsqueeze(1) * target).squeeze(1)

  • IndexError: invalid index of a 0-dim tensor. Use tensor.item() to convert a 0-dim tensor to a Python number
    at the lines
    print("===> Epoch[{}]({}/{}): Batch Dice: {:.4f}".format(epoch, i, len(dataloader), 1 - loss_dice.data[0]))
    print("===> Epoch[{}]({}/{}): G_Loss: {:.4f}".format(epoch, i, len(dataloader), loss_G.data[0]))
    print("===> Epoch[{}]({}/{}): D_Loss: {:.4f}".format(epoch, i, len(dataloader), loss_D.data[0]))
    I think it should be loss_dice.data or loss_dice.item() instead.
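The first fix can be sketched with a minimal self-contained example (the sizes below are small stand-ins for the reported `(36, C, 128, 128)` tensors; variable names follow the snippet above):

```python
import torch

# Hypothetical small sizes standing in for the reported (36, C, 128, 128) tensors.
batch, n_classes, h, w = 2, 4, 8, 8
input_mask = torch.ones(batch, n_classes, h, w)
output = torch.ones(batch, 1, h, w)
output_masked = torch.zeros(batch, n_classes, h, w)

for d in range(n_classes):
    # input_mask[:, d, :, :].unsqueeze(1) has shape (batch, 1, h, w), so the
    # product with output is 4-D; assigning it to the 3-D slice
    # output_masked[:, d, :, :] raises the reported expand() RuntimeError
    # unless the singleton channel dimension is squeezed out first.
    output_masked[:, d, :, :] = (input_mask[:, d, :, :].unsqueeze(1) * output).squeeze(1)

print(output_masked.shape)  # torch.Size([2, 4, 8, 8])
```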

Could you please look at these? I think the issues might be caused by the updates in pytorch.
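The second fix can be sketched the same way; `loss_dice` here is a hypothetical 0-dim tensor standing in for the actual loss:

```python
import torch

# Stand-in 0-dim loss tensor; since PyTorch 0.4, reduced losses are 0-dim,
# so indexing with loss.data[0] raises the reported IndexError.
loss_dice = torch.tensor(0.25)

# .item() extracts the Python number from a 0-dim tensor.
dice = 1 - loss_dice.item()
print("===> Batch Dice: {:.4f}".format(dice))  # ===> Batch Dice: 0.7500
```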


jingz12 commented Jun 26, 2019

Hi, I am using CUDA 10.0 and PyTorch 1.0.1. I hit the first error too, and also used squeeze(1) to fix it, but I did not get the second one.

@YuanXue1993
Owner

Thanks for pointing out these issues. I haven't had time to update the code for newer CUDA and PyTorch versions, but PRs are welcome!

@yuexiaheihei

Could you please share your CUDA version and PyTorch version?
