
Index [0] of grad tensor #35

Open
ahmed-fau opened this issue Jun 11, 2018 · 2 comments

Comments


ahmed-fau commented Jun 11, 2018

Hi,

I am a little confused about taking only the zero index of gradient tensor in the penalty function:

```python
gradients = autograd.grad(
    outputs=disc_interpolates,
    inputs=interpolates,
    grad_outputs=torch.ones(disc_interpolates.size()),
    create_graph=True,
    retain_graph=True,
    only_inputs=True,
)[0]
```

Why is it not possible to take the whole grad tensor ?

Best

@bhargavajs07

Because `autograd.grad` accepts a sequence (tuple) of input tensors (the tensors with respect to which the gradient of the output is computed), its return value is also a tuple, containing one gradient tensor per input tensor. Here, since you are differentiating with respect to only one input tensor, you take the `[0]`-th gradient tensor from that tuple.
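A minimal sketch of this behavior, using hypothetical stand-ins for `disc_interpolates` and `interpolates` (a toy function in place of the real discriminator):

```python
import torch
from torch import autograd

# Hypothetical stand-ins for the interpolated samples and the
# discriminator output from the gradient-penalty snippet above.
interpolates = torch.randn(4, 3, requires_grad=True)
disc_interpolates = (interpolates ** 2).sum(dim=1, keepdim=True)

grads = autograd.grad(
    outputs=disc_interpolates,
    inputs=interpolates,
    grad_outputs=torch.ones(disc_interpolates.size()),
    create_graph=True,
    retain_graph=True,
    only_inputs=True,
)

# autograd.grad always returns a tuple: one gradient tensor per input.
# With a single input tensor, the tuple has exactly one element, and
# [0] unwraps it into a tensor with the same shape as `interpolates`.
assert isinstance(grads, tuple)
assert len(grads) == 1
gradients = grads[0]
assert gradients.shape == interpolates.shape
```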

@AlephZr

AlephZr commented Dec 19, 2019

In fact, the tuple returned by `autograd.grad` contains only one element here; `[0]` simply removes the tuple container and takes out the gradient tensor inside.

Here is a reply from the author of the paper to the same question, which may help:
https://github.com/igul222/improved_wgan_training/issues/34
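To see why the tuple container exists at all, here is a small sketch (with made-up tensors `x` and `y`) showing that passing two input tensors yields a tuple of two gradients, one per input:

```python
import torch
from torch import autograd

# Two independent inputs to differentiate with respect to.
x = torch.randn(5, requires_grad=True)
y = torch.randn(5, requires_grad=True)
out = (x * y).sum()

# With two inputs, autograd.grad returns a tuple of two gradient tensors.
grads = autograd.grad(outputs=out, inputs=(x, y))
assert len(grads) == 2

# d(out)/dx == y and d(out)/dy == x, since out = sum(x * y).
assert torch.allclose(grads[0], y)
assert torch.allclose(grads[1], x)
```

So `[0]` in the gradient-penalty code is not discarding information; it is just selecting the single gradient that corresponds to the single input.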
