Thank you so much for the amazing work. This has really helped me a lot throughout my research.
However, may I know how to train the model without using the eval method? I just want the model trained with the contrastive loss only, and I will later visualize the embeddings with t-SNE, so accuracy doesn't matter to me. Currently, I'm working with my own dataset, with graph classification as the downstream task.
Initially, I tried using mvgrl.train(), but there seems to be no learning progress and it also raised an error. Please see the attachments below for more details.
Hi @MarkSttc, thank you for your interest and question. You can definitely train the encoders in a contrastive manner for other tasks.
The simplest way is to run the evaluation process as usual. Once training and evaluation have finished, you can take encoder_adj and encoder_diff and use them for any other task, since their parameters have already been updated.
If you want to skip the evaluation, you can also do
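something along these lines. This is a minimal sketch, not the repo's exact code: the linear encoders, the two input views, and the InfoNCE-style loss below are stand-ins (assumptions) that you would replace with the actual encoder_adj / encoder_diff modules and the contrastive objective from this repository. It only shows the shape of a training loop that optimizes the contrastive loss and then extracts embeddings for t-SNE, with no evaluation step.

```python
# Hedged sketch: train two view-specific encoders with a contrastive loss
# only, skip evaluation, and keep the embeddings for t-SNE.
# All module names below are placeholders, not this repo's API.
import torch
import torch.nn as nn
import torch.nn.functional as F

torch.manual_seed(0)

# Stand-in encoders; substitute the repo's encoder_adj / encoder_diff.
encoder_adj = nn.Linear(16, 8)
encoder_diff = nn.Linear(16, 8)

params = list(encoder_adj.parameters()) + list(encoder_diff.parameters())
optimizer = torch.optim.Adam(params, lr=1e-3)

# Stand-in inputs for the two graph views (adjacency / diffusion).
x_adj = torch.randn(32, 16)
x_diff = torch.randn(32, 16)

def contrastive_loss(z1, z2):
    # A simple InfoNCE-style loss between the two views; an assumption,
    # not necessarily the objective used in the repo.
    z1, z2 = F.normalize(z1, dim=1), F.normalize(z2, dim=1)
    logits = z1 @ z2.t()
    labels = torch.arange(z1.size(0))
    return F.cross_entropy(logits, labels)

losses = []
for epoch in range(50):
    optimizer.zero_grad()
    loss = contrastive_loss(encoder_adj(x_adj), encoder_diff(x_diff))
    loss.backward()
    optimizer.step()
    losses.append(loss.item())

# No evaluation: just take the trained embeddings for t-SNE.
with torch.no_grad():
    embeddings = torch.cat([encoder_adj(x_adj), encoder_diff(x_diff)], dim=1)
```

After the loop, `embeddings` (or the output of either encoder alone) can be passed straight to `sklearn.manifold.TSNE` for visualization; no accuracy metric is ever computed.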