
pytorch ops for element-wise division and product #7

Open
chenmagi opened this issue Sep 8, 2023 · 2 comments

Comments

@chenmagi

chenmagi commented Sep 8, 2023

Hi,

Because my graph adjacency matrix is large (100k × 100k nodes), I plan to use torch's sparse tensors for storing the data. I noticed that you implemented a custom loss with a backward function. Is it possible to use a SparseTensor together with autograd?
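For reference, one common pattern for combining a sparse adjacency matrix with a hand-written backward is a custom `torch.autograd.Function`. This is only a minimal sketch of that pattern, not the repository's actual loss; the class name `SparseDenseMM` and the choice of a sparse–dense matmul are illustrative assumptions.

```python
import torch

class SparseDenseMM(torch.autograd.Function):
    """Sketch: A @ X with sparse COO A and a hand-written backward.

    Only the gradient w.r.t. the dense operand X is computed here;
    the sparse A is treated as a constant (returns None for its grad).
    """

    @staticmethod
    def forward(ctx, A, X):
        ctx.save_for_backward(A, X)
        return torch.sparse.mm(A, X)

    @staticmethod
    def backward(ctx, grad_out):
        A, X = ctx.saved_tensors
        # d(A @ X)/dX contracted with grad_out is A^T @ grad_out.
        # coalesce() because the transpose of a coalesced COO tensor
        # is not guaranteed to be coalesced.
        grad_X = torch.sparse.mm(A.t().coalesce(), grad_out)
        return None, grad_X

# Usage sketch with a tiny 2x3 sparse matrix
idx = torch.tensor([[0, 1, 1], [1, 0, 2]])
vals = torch.tensor([1.0, 2.0, 3.0])
A = torch.sparse_coo_tensor(idx, vals, (2, 3))
X = torch.randn(3, 4, requires_grad=True)
Y = SparseDenseMM.apply(A, X)
Y.sum().backward()  # populates X.grad via the custom backward
```

A per-node loop over sparse indices (as in the repository's backward) can be plugged into `backward` in the same way; the autograd machinery only requires that the returned gradients match the forward inputs in number and shape.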

@saurabhdash
Owner

Hi! This repository is a bit old, but I think you can do that. However, it is going to be slow because the loop is written in Python; if you rewrite the backward pass in C/CUDA using the equations I have posted, it should be much faster.

@chenmagi
Author

Hi,

In my experiment, most of the workload in the custom backward function comes from extracting the edge indices from the sparse array, i.e. alpha_ind = (idx[0, :] == i).nonzero(). If we pre-compute alpha_ind for each node, that would be a good performance improvement.
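The pre-computation suggested here amounts to bucketing edge positions by source node once, instead of scanning all of idx for every node inside the backward loop (O(E) total instead of O(N·E)). A minimal sketch, shown with plain Python lists; the helper name `bucket_edges_by_source` is hypothetical:

```python
from collections import defaultdict

def bucket_edges_by_source(idx):
    """Group edge positions by source node in one pass, replacing the
    repeated (idx[0, :] == i).nonzero() scan for every node i."""
    buckets = defaultdict(list)
    for e, src in enumerate(idx[0]):
        buckets[src].append(e)
    return buckets

# COO layout: idx[0] holds source nodes, idx[1] destination nodes
idx = [[0, 0, 1, 2, 2, 2], [1, 2, 0, 0, 1, 2]]
alpha_ind = bucket_edges_by_source(idx)
# alpha_ind[i] now lists the edge positions whose source is node i,
# ready to be reused on every backward call
```

With torch tensors, the same effect can be had by converting `idx[0]` to a Python list first (e.g. `idx[0].tolist()`), or by sorting the edge index by source node and storing the resulting offsets.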
