

Scores from epoch with best dev-scores: #35

Open
ghost opened this issue Jan 25, 2019 · 5 comments

Comments

@ghost

ghost commented Jan 25, 2019

Hi,

I am just trying to understand what the line "Scores from epoch with best dev-scores:" means.

The log reports the test and dev scores for each epoch, so what does "best dev score" refer to? Does it have anything to do with mini-batches?

@nreimers
Member

Hi,
the development and test scores are computed every epoch.

The system also remembers which epoch had the best development score so far. In every epoch, it additionally shows the dev & test scores from the epoch that had the best development score up to that point.

If the current epoch has the best dev score, it shows the dev & test scores from the current epoch.
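To make the bookkeeping concrete, here is a minimal standalone sketch (not the actual BiLSTM.fit code) of the logic described above: after every epoch, the dev & test scores from the epoch with the highest dev score so far are reported alongside the current epoch's scores.

```python
def report_best(epoch_scores):
    """epoch_scores: list of (dev_score, test_score) tuples, one per epoch.

    Prints current and best-so-far scores each epoch and returns
    (best_epoch, best_dev, best_test).
    """
    best_dev, best_test, best_epoch = -1.0, -1.0, -1
    for epoch, (dev, test) in enumerate(epoch_scores):
        if dev > best_dev:  # current epoch is the new best dev epoch
            best_dev, best_test, best_epoch = dev, test, epoch
        print("Epoch %d: dev=%.4f test=%.4f | best so far (epoch %d): dev=%.4f test=%.4f"
              % (epoch, dev, test, best_epoch, best_dev, best_test))
    return best_epoch, best_dev, best_test
```

Note that the reported "best" test score is the test score *from the best-dev epoch*, not the maximum test score over all epochs, which is the standard way to avoid selecting a model on the test set.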

@ghost
Author

ghost commented Jan 27, 2019

Hi, thanks.

Are the actual predictions made by the classifier written somewhere?

@nreimers
Member

The actual predictions are not written to disk during training. However, once you save the model, you can load it and use it for predictions. The repository contains several examples of how to load the models; for example, see RunModel_CoNLL_Format.py.

If you would like to have the predictions during training, you would need to modify the BiLSTM.fit function.
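As a hypothetical sketch of what such a modification could add inside the training loop: a small helper that writes each epoch's dev/test predictions (and optionally the gold labels) in a CoNLL-like one-token-per-line format. The function name and file layout here are assumptions for illustration, not the repository's actual API.

```python
def write_predictions(path, sentences, predicted_labels, gold_labels=None):
    """Write predictions in CoNLL-like format.

    One token per line: token TAB predicted_label [TAB gold_label],
    with a blank line between sentences.
    """
    with open(path, "w", encoding="utf-8") as f:
        for i, sentence in enumerate(sentences):
            for j, token in enumerate(sentence):
                fields = [token, predicted_labels[i][j]]
                if gold_labels is not None:
                    fields.append(gold_labels[i][j])
                f.write("\t".join(fields) + "\n")
            f.write("\n")  # sentence separator
```

Inside the training loop you would then call it once per epoch, e.g. `write_predictions("dev_epoch_%d.tsv" % epoch, dev_sentences, dev_predictions, dev_gold)`.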

@ghost
Author

ghost commented Jan 28, 2019

So I need the function to write the predictions made for the dev and test set in each epoch to the respective files. Does only BiLSTM.fit need to be changed for that?

@nreimers
Member

Yes. BiLSTM.fit predicts the labels for the dev/test set. You just need to add lines so that these predictions (and maybe the gold labels) are stored in a file.
