CS224W - Bag of Tricks for Node Classification with GNN - LogE Loss #6
Conversation
lgtm!
Fair point! Cut down the table quite a bit and linked to the rest. Still wanted to include at least a couple of results for most methods to show when they are consistent/inconsistent across datasets. Could move the commands below, but I think it is fairly readable now, and that would make the description longer.
lgtm
Closing because I can't figure out how to unlink this from the PyG issue; this was just a draft for our internal review.
Implement $\text{log-}\epsilon$ loss functions
Part of #4 (TODO edit this). As described in “Bag of Tricks for Node Classification with Graph Neural Networks”, this non-convex loss is thought to be less sensitive to outliers: it provides a maximal gradient at the decision boundary while still giving significant signal for all misclassified examples.
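For illustration, here is a minimal sketch of what such a loss could look like, assuming the formulation from the paper: per-example cross-entropy rescaled through $\log(\epsilon + \cdot)$, with $\epsilon = 1 - \log 2$ placing the steepest gradient at the decision boundary ($p = 0.5$). The name `loge_loss` and its signature are illustrative, not necessarily the PR's actual API:

```python
import math

import torch
import torch.nn.functional as F


def loge_loss(logits: torch.Tensor, target: torch.Tensor,
              epsilon: float = 1.0 - math.log(2.0)) -> torch.Tensor:
    # Per-example cross-entropy, kept unreduced so each example is
    # rescaled individually.
    ce = F.cross_entropy(logits, target, reduction='none')
    # Non-convex rescaling: the loss is zero when ce == 0 and grows only
    # logarithmically for large ce, so outliers keep a bounded but
    # nonzero gradient.
    return (torch.log(epsilon + ce) - math.log(epsilon)).mean()
```

It would drop in wherever `F.nll_loss` / `F.cross_entropy` is used in a training step, e.g. `loss = loge_loss(out[data.train_mask], data.y[data.train_mask])`.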
Details
torch_geometric.nn.functional seemed like a reasonable place for them to live. Happy to move to contrib as well.
Benchmarks
Running benchmarks/citation on Colab's T4s, we see the loss can bring small but statistically significant gains (reported as the loge − nll difference).
The default settings from benchmarks/citation were used, with a batch norm inserted, though these are surely suboptimal settings.
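For concreteness, a minimal sketch of what "a batch norm inserted" into the two-layer citation GCN might look like; this is an assumed reconstruction for illustration, not the exact benchmark code:

```python
import torch
import torch.nn.functional as F
from torch_geometric.nn import GCNConv


class GCNWithBN(torch.nn.Module):
    # Two-layer GCN with a BatchNorm1d inserted after the first conv,
    # mirroring the modification described above (hypothetical sketch).
    def __init__(self, in_channels: int, hidden_channels: int,
                 out_channels: int):
        super().__init__()
        self.conv1 = GCNConv(in_channels, hidden_channels)
        self.bn = torch.nn.BatchNorm1d(hidden_channels)
        self.conv2 = GCNConv(hidden_channels, out_channels)

    def forward(self, x: torch.Tensor,
                edge_index: torch.Tensor) -> torch.Tensor:
        x = F.relu(self.bn(self.conv1(x, edge_index)))
        x = F.dropout(x, p=0.5, training=self.training)
        return self.conv2(x, edge_index)
```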