Hi,

I am a little confused about taking only the zeroth index of the gradient tensor in the penalty function:

gradients = autograd.grad(outputs=disc_interpolates, inputs=interpolates, grad_outputs=torch.ones(disc_interpolates.size()), create_graph=True, retain_graph=True, only_inputs=True)[0]

Why is it not possible to take the whole gradient tensor?

Best

Because autograd.grad takes a sequence (tuple) of tensors as inputs (the tensors with respect to which the gradient of the output is computed), its return value is also a sequence (tuple) containing one gradient tensor per input tensor. Here you are differentiating with respect to only one input tensor, so you take the [0]-th gradient tensor from that tuple.

In fact, the tuple returned by autograd.grad contains only one element; [0] simply removes the tuple container and extracts the inner tensor.
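To see the tuple structure concretely, here is a minimal sketch (the tensor names x, y, and the toy outputs are illustrative, not from the original code): differentiating with respect to two inputs yields a 2-tuple, so with a single input the tuple has exactly one element.

```python
import torch
from torch import autograd

x = torch.randn(3, requires_grad=True)
y = torch.randn(3, requires_grad=True)
out = (x * y).sum()

# autograd.grad returns one gradient tensor per entry in `inputs`,
# so this is a 2-tuple: (d(out)/dx, d(out)/dy).
dx, dy = autograd.grad(out, (x, y))

# With a single input it is a 1-tuple, hence the [0] in the question.
dz = autograd.grad((x * x).sum(), x)[0]
```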
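For context, here is how the line from the question typically fits into a WGAN-GP style gradient penalty. This is a minimal sketch under assumptions not stated in the thread: the names critic, real, and fake are hypothetical, and image-shaped batches (B, C, H, W) are assumed.

```python
import torch
from torch import autograd

def gradient_penalty(critic, real, fake):
    # Interpolate between real and fake samples (hypothetical inputs).
    alpha = torch.rand(real.size(0), 1, 1, 1, device=real.device)
    interpolates = (alpha * real + (1 - alpha) * fake).requires_grad_(True)
    disc_interpolates = critic(interpolates)

    # `inputs` holds a single tensor, so autograd.grad returns a
    # 1-tuple and [0] unwraps it. torch.ones_like is equivalent to
    # torch.ones(disc_interpolates.size()) but created on the right device.
    gradients = autograd.grad(
        outputs=disc_interpolates,
        inputs=interpolates,
        grad_outputs=torch.ones_like(disc_interpolates),
        create_graph=True,
        retain_graph=True,
        only_inputs=True,
    )[0]

    # WGAN-GP penalty: (||grad||_2 - 1)^2, averaged over the batch.
    grad_norm = gradients.view(gradients.size(0), -1).norm(2, dim=1)
    return ((grad_norm - 1) ** 2).mean()
```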