feat: Add AnswerF1Evaluator
#7073
Conversation
Pull Request Test Coverage Report for Build 8360549906

Warning: This coverage report may be inaccurate. This pull request's base commit is no longer the HEAD commit of its target branch. This means it includes changes from outside the original pull request, including, potentially, unrelated coverage changes.

💛 - Coveralls
Force-pushed from 916f55d to eb1a48c
Similar to my comment on the AnswerRecall PRs, we need the tokenized versions of answers to calculate F1 on answers. #7394 (review)
Closing this as we're going in a different direction to calculate F1 scores. We're probably going to have a Component that doesn't take answers as inputs but rather the results of other components that calculate answer precision and recall.
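For reference, the direction described above derives F1 from precision and recall values produced by other components rather than from raw answers. A minimal sketch of that harmonic-mean step, assuming the precision and recall scores come from separate evaluator components (the function name is hypothetical, not from this PR):

```python
def f1_from_precision_recall(precision: float, recall: float) -> float:
    """F1 is the harmonic mean of precision and recall."""
    # Guard against the degenerate case where both inputs are zero.
    if precision + recall == 0.0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```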
Related Issues
Proposed Changes:

Add AnswerF1Evaluator, a Component that can be used to calculate the F1 score metric given a list of questions, a list of expected answers for each question, and the list of predicted answers for each question.

How did you test it?

Added unit tests.
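The PR diff itself is not shown here, but based on the description, a component like this might look roughly as follows. This is a minimal sketch: the class name AnswerF1Evaluator comes from the PR, while the exact run() signature, the whitespace tokenization, and the best-match aggregation over answer pairs are assumptions (the reviewer's comment above notes that proper tokenization of answers is still needed). The @component decorator and output_types follow Haystack 2.x conventions.

```python
from collections import Counter
from typing import Any, Dict, List

from haystack import component


def _token_f1(truth: str, prediction: str) -> float:
    # Whitespace tokenization is an assumption; the review discussion
    # points out that tokenized answers are required to compute F1 properly.
    truth_tokens = truth.split()
    pred_tokens = prediction.split()
    # Count tokens shared between the two answers (multiset intersection).
    common = Counter(truth_tokens) & Counter(pred_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(truth_tokens)
    return 2 * precision * recall / (precision + recall)


@component
class AnswerF1Evaluator:
    """Calculates a token-level F1 score between expected and predicted answers."""

    @component.output_types(individual_scores=List[float], score=float)
    def run(
        self,
        questions: List[str],
        ground_truth_answers: List[List[str]],
        predicted_answers: List[List[str]],
    ) -> Dict[str, Any]:
        individual_scores = []
        for truths, predictions in zip(ground_truth_answers, predicted_answers):
            # For each question, take the best F1 over all
            # (expected answer, predicted answer) pairs.
            best = 0.0
            for truth in truths:
                for pred in predictions:
                    best = max(best, _token_f1(truth, pred))
            individual_scores.append(best)
        # Aggregate score is the mean over all questions.
        score = sum(individual_scores) / len(individual_scores) if individual_scores else 0.0
        return {"individual_scores": individual_scores, "score": score}
```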
Notes for the reviewer

I didn't add the component to the package __init__.py on purpose, to avoid conflicts with future PRs. When all the evaluators are done I'll update it.
Checklist

The PR title follows one of the conventional commit types: fix:, feat:, build:, chore:, ci:, docs:, style:, refactor:, perf:, test:.