NaN values with Kendall rank coefficient and batch training #1629
Unanswered · gudeh asked this question in Regression · 0 comments
Hi everyone.
I am using the Kendall rank correlation coefficient from torchmetrics to score my training. I am using the DGL library, which implements graph neural networks on top of PyTorch.
My predictions and targets are 1-D tensors that I pass to the Kendall metric. On some graphs I get NaN values at every epoch of training (even though the MSE loss decreases across epochs). This happens only when I use batch training with the Kendall 'b' variant (the standard one); if I switch to variant 'a', batch training works fine.
I would appreciate any explanation of why this is happening. I suspect something is wrong in my implementation, perhaps related to batching.
Either way, it would be nice to understand exactly in which situations the Kendall function returns NaN values.
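For context, here is a minimal pure-Python sketch of Kendall's tau-b (not the torchmetrics implementation) that illustrates one situation where tau-b is undefined: if every pair in one input is tied (e.g. a constant tensor within a batch), the denominator becomes zero and the result is NaN. Tau-a divides by the fixed pair count n(n-1)/2 instead, which is why it can still return a finite value on the same data.

```python
import math

def kendall_tau_b(x, y):
    """Tau-b = (C - D) / sqrt((n0 - tx) * (n0 - ty)), where n0 = n(n-1)/2,
    C/D count concordant/discordant pairs, and tx/ty count pairs tied in
    x and y respectively. Sketch for illustration only."""
    n = len(x)
    concordant = discordant = ties_x = ties_y = 0
    for i in range(n):
        for j in range(i + 1, n):
            dx = x[i] - x[j]
            dy = y[i] - y[j]
            if dx == 0 and dy == 0:      # tied in both
                ties_x += 1
                ties_y += 1
            elif dx == 0:                # tied in x only
                ties_x += 1
            elif dy == 0:                # tied in y only
                ties_y += 1
            elif dx * dy > 0:
                concordant += 1
            else:
                discordant += 1
    n0 = n * (n - 1) // 2
    denom = math.sqrt((n0 - ties_x) * (n0 - ties_y))
    if denom == 0:   # all pairs tied in x or in y -> tau-b is undefined
        return float("nan")
    return (concordant - discordant) / denom

print(kendall_tau_b([1.0, 2.0, 3.0], [1.0, 3.0, 2.0]))  # ~0.333
print(kendall_tau_b([1.0, 2.0, 3.0], [5.0, 5.0, 5.0]))  # nan: target constant
```

If small batches make it likely that all targets (or all predictions) in a batch share the same value, this would produce exactly the symptom described: NaN under batch training with variant 'b' but not with variant 'a'.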