Because my graph adjacency matrix is large (100k x 100k nodes), I plan to use torch's SparseTensor to store it. I noticed that you implemented a custom loss with a backward function. Is it possible to use SparseTensor together with autograd?
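For reference, this is roughly what I mean by storing the adjacency as a sparse tensor (just a sketch; the edge list below is a placeholder for my real data):

```python
import torch

# Hypothetical example: the real edge_index / edge_weight come from my dataset.
num_nodes = 100_000
edge_index = torch.tensor([[0, 1, 2],
                           [1, 2, 0]])              # 2 x E source/target pairs
edge_weight = torch.ones(edge_index.size(1))

# 100k x 100k adjacency kept in sparse COO format
adj = torch.sparse_coo_tensor(edge_index, edge_weight,
                              size=(num_nodes, num_nodes)).coalesce()
```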
Hi! This repository is a bit old, but I think you can do that. However, it is going to be slow because the loop is written in Python; if you can rewrite the backward in C/CUDA using the equations I have posted, it should be much faster.
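To answer the autograd part: yes, a custom `torch.autograd.Function` can read a sparse adjacency in its forward and backward. Below is only a sketch with a toy edge loss (sum over edges of `w_ij * ||emb_i - emb_j||^2`), not the actual loss in this repo; it just shows the pattern, including the slow per-node Python loop:

```python
import torch

class EdgeSquaredLoss(torch.autograd.Function):
    """Hypothetical stand-in for the custom loss: sum over edges (i, j) of
    w_ij * ||emb_i - emb_j||^2, with a hand-written backward that reads a
    sparse COO adjacency."""

    @staticmethod
    def forward(ctx, emb, adj):
        adj = adj.coalesce()                     # make indices/values accessible
        idx, w = adj.indices(), adj.values()     # idx: 2 x E, w: E
        diff = emb[idx[0]] - emb[idx[1]]         # E x D
        ctx.save_for_backward(emb, idx, w)
        return (w * diff.pow(2).sum(dim=1)).sum()

    @staticmethod
    def backward(ctx, grad_out):
        emb, idx, w = ctx.saved_tensors
        grad = torch.zeros_like(emb)
        # Slow reference version: a Python loop over nodes, as discussed above.
        for i in range(emb.size(0)):
            alpha_ind = (idx[0, :] == i).nonzero().flatten()   # edges leaving i
            beta_ind = (idx[1, :] == i).nonzero().flatten()    # edges entering i
            if alpha_ind.numel():
                grad[i] += (2 * w[alpha_ind, None] *
                            (emb[i] - emb[idx[1, alpha_ind]])).sum(dim=0)
            if beta_ind.numel():
                grad[i] += (2 * w[beta_ind, None] *
                            (emb[i] - emb[idx[0, beta_ind]])).sum(dim=0)
        return grad_out * grad, None             # no gradient w.r.t. the adjacency

# usage: loss = EdgeSquaredLoss.apply(emb, adj)
```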
In my experiments, most of the workload of the custom backward function is in extracting the edge indices from the sparse array, i.e. `alpha_ind = (idx[0, :] == i).nonzero()`. If we pre-calculate `alpha_ind` for each node, that would be a good performance improvement (see the sketch below).
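For example, something like this (just a sketch; `idx` is the 2 x E index tensor of the coalesced sparse adjacency):

```python
import torch

def precompute_alpha_ind(idx, num_nodes):
    """Group edge positions by source node once, instead of calling
    (idx[0, :] == i).nonzero() inside the per-node loop."""
    order = torch.argsort(idx[0])                           # edge positions sorted by source
    counts = torch.bincount(idx[0], minlength=num_nodes)    # edges per source node
    return list(torch.split(order, counts.tolist()))        # one chunk per node

# inside backward, computed once before the loop:
# alpha_ind_per_node = precompute_alpha_ind(idx, emb.size(0))
# for i in range(emb.size(0)):
#     alpha_ind = alpha_ind_per_node[i]
#     ...
```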