Add XLSum evaluation / unify eval script #12
base: sentence_retrieval_eval
Conversation
…tence_retrieval_eval Sentence retrieval eval
…cript, and changed the slurm running settings
…layer (instead of whole model), added early stopping,
…ce-workshop/multilingual-modeling into remotes/origin/sentence_retrieval_eval
Thanks Hailey! (Referring to #11) Will resolve this PR once Vassilina and I have finalized our evaluation script on XNLI. Apologies for the delay.
@haileyschoelkopf Can you help review b0a23c5? Thank you!
Yes I can! I might only get to it tomorrow, though.
Submitting a PR from fork because I may not have edit access to this repo.
In this PR: added `adapters_eval.py`, a script that can be used to evaluate on XLSum or XNLI based on the `dataset` flag. Also working on adding DeepSpeed compatibility via the Hugging Face Trainer / command line.
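The "one script, dataset flag" idea can be sketched roughly as below. This is only an illustration of the dispatch pattern, not the actual `adapters_eval.py`; the function names, the `--model_path` flag, and its default are hypothetical.

```python
import argparse

# Hypothetical per-dataset entry points; the real script would load the model,
# add adapters, and run the Trainer here.
def eval_xlsum(args):
    return f"evaluating XLSum with model at {args.model_path}"

def eval_xnli(args):
    return f"evaluating XNLI with model at {args.model_path}"

# Single dispatch table keyed by the dataset flag.
DISPATCH = {"xlsum": eval_xlsum, "xnli": eval_xnli}

def main(argv=None):
    parser = argparse.ArgumentParser(description="Unified adapter evaluation")
    parser.add_argument("--dataset", choices=sorted(DISPATCH), required=True)
    parser.add_argument("--model_path", default="path/to/model")  # placeholder
    args = parser.parse_args(argv)
    return DISPATCH[args.dataset](args)

if __name__ == "__main__":
    print(main())
```

Keeping one dispatch table means adding another benchmark later is a one-line change rather than a new script.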
TODO/needs checking:
- The `compute_metrics` function could be wrong. I will try to check this.
- Check that `load_model` for setting adapters to train / adding adapters is correct.
- Have the TODOs in `adapters_xnli_de.py` been dealt with?
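For the XNLI classification case, a Trainer-style `compute_metrics` that scores accuracy usually looks like the sketch below (assuming the model emits per-class logits; the XLSum summarization path would need ROUGE instead). This is a reference point for the checking above, not the code under review.

```python
import numpy as np

def compute_metrics(eval_pred):
    # eval_pred is the (predictions, label_ids) pair that
    # transformers.Trainer passes in; predictions are per-class logits.
    logits, labels = eval_pred
    preds = np.argmax(logits, axis=-1)  # predicted class per example
    return {"accuracy": float((preds == np.asarray(labels)).mean())}
```

A common bug here is taking the argmax over the wrong axis or comparing against unaligned label ids, so checking it on a tiny hand-built batch is worthwhile.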