Random initialization is better? #1
What is the command you use for random initialization?
I modified run_model_NewLook.py: I only removed the training-related code and set the --init parameter to None for random initialization. That is the only change I made.
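For context, a minimal sketch of what that change amounts to. The --init flag is the one discussed in this thread; the stand-in model and the rest of the structure are illustrative assumptions, not the repository's actual code:

```python
import argparse

import torch
import torch.nn as nn

parser = argparse.ArgumentParser()
# --init is the flag discussed above; leaving it at None means no
# checkpoint is loaded and the weights stay randomly initialized.
parser.add_argument("--init", default=None,
                    help="path to a pretrained checkpoint; omit for random init")
args = parser.parse_args()

# Stand-in model: the real NewLook model is more complex, but the
# checkpoint-loading logic being described is the same.
model = nn.Linear(128, 128)

if args.init is not None:
    # A checkpoint path was given: overwrite the random weights with it.
    model.load_state_dict(torch.load(args.init))
# With --init left as None, any evaluation that follows runs on the
# framework-default random initialization, which is the setup described here.
```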
Thanks for pointing out the problem. I just fixed it, and it should work correctly now.
Thank you for your quick fix. Besides the above, I have another question:
Problem description: I initialized the parameters of NewLook randomly (that is, I set the --init or --init_checkpoint parameter to None in run_model_NewLook.py) and found that direct random initialization performs better than the results presented in the paper (taking FB15k-237 as an example). Can you explain this?
Below is the results table:
Notes:
NLK refers to the results reported in your paper;
VAL refers to the validation set results after training for 30,000 steps, using the saved model to predict;
TST refers to the test set results with randomly initialized parameters for the NewLook model.
We can see that TST often performs better. I am very confused by this result and look forward to your reply.