Benchmarking GLUE tasks for in-context learning #707
Labels: question
❓ Question

I am trying to benchmark llama-2-7b on the GLUE benchmark for in-context learning, but the accuracy I get for MNLI (mismatched validation) is 35.22% for both zero-shot and 8-shot, which is roughly chance level for the three-way task. My questions are: is InContextLearningMultipleChoiceTaskDataset the right class for this task? Is there another recommended way to implement this?

PS: I also ran the evaluation for the qqp task and got 36.82% for 0-shot and 63.09% for 8-shot.

Any help would be greatly appreciated.

Thank you,
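For reference, here is a minimal sketch of how such a multiple-choice ICL dataloader can be built with Composer's `get_icl_task_dataloader`; the dataset path, delimiters, and batch settings below are placeholder assumptions, not the configuration actually used in this issue.

```python
# Minimal sketch (not the exact setup from this issue): an MNLI multiple-choice
# ICL dataloader built with Composer. Path, delimiters, and sizes are assumptions.
from composer.datasets.in_context_learning_evaluation import get_icl_task_dataloader
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("meta-llama/Llama-2-7b-hf")

dataloader = get_icl_task_dataloader(
    icl_task_type="multiple_choice",       # backed by InContextLearningMultipleChoiceTaskDataset
    dataset_uri="mnli_mismatched.jsonl",   # placeholder: jsonl with query/choices/gold fields
    tokenizer=tokenizer,
    batch_size=8,
    max_seq_len=2048,
    pad_tok_id=tokenizer.eos_token_id,
    num_fewshot=8,                         # set to 0 for the zero-shot run
    prompt_string="",                      # optional preamble prepended once per prompt
    example_delimiter="\n",                # separator between few-shot examples
    continuation_delimiter=" ",            # separator between query and candidate answer
)
```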
Comments

Update: Here is the yaml file we used:
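The YAML itself did not survive the page capture. As a rough, hypothetical reconstruction, an llm-foundry-style `icl_tasks` entry for this kind of run looks like the following; the label, path, and delimiter values are assumptions:

```yaml
# Hypothetical reconstruction, not the original file from this issue.
icl_tasks:
  - label: mnli_mismatched
    dataset_uri: local_data/mnli_mismatched.jsonl  # placeholder path
    num_fewshot: [0, 8]
    icl_task_type: multiple_choice
    continuation_delimiter: ' '
```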
And here is a sample example from the dataset:
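The sample is also missing from the capture. For the multiple_choice task type, Composer expects each jsonl line to provide query, choices, and gold fields; a hypothetical MNLI line (the prompt wording is an assumption) might look like:

```json
{"query": "Premise: The new rights are nice enough.\nHypothesis: Everyone really likes the newest benefits.\nAnswer:", "choices": [" Yes", " Maybe", " No"], "gold": 1}
```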
Please let me know if you need any more details. Thanks,

We also tried running the evaluation using lm-evaluation-harness. Here are the numbers with the two libraries:
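An invocation along these lines is presumably what is meant; this is a sketch in the v0.3-era harness CLI style, and task names and flags differ across lm-evaluation-harness versions:

```bash
# Hypothetical invocation (harness v0.3-style CLI); task names and flags
# vary across lm-evaluation-harness versions.
python main.py \
    --model hf-causal \
    --model_args pretrained=meta-llama/Llama-2-7b-hf \
    --tasks mnli_mismatched,qqp \
    --num_fewshot 8
```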