d4data/bias-detection-model yielding large number of false positives #3

Open
AlejandroEsquivel opened this issue Nov 21, 2024 · 1 comment

Comments

@AlejandroEsquivel

There was a recent change to the validator because the previous version was broken and unmaintained.

As a result, the underlying model was switched out in favour of d4data/bias-detection-model.
The new underlying model doesn't seem to classify adequately; although my sample size is small, I suspect benchmarks would support this claim.

[Screenshots of example classifications showing false positives]
@JosephCatrambone
Contributor

For what it's worth, the underlying model is the same. DBias uses d4data/bias-detection-model internally. I suspect there's a compatibility issue with how tokenization is being handled, or possibly something else.
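
To help isolate whether the false positives come from the model itself or from the DBias wrapper, here's a minimal sketch that queries d4data/bias-detection-model directly (assuming the Hugging Face `transformers` text-classification pipeline; the sample sentences are hypothetical stand-ins for the inputs in the screenshots above):

```python
# Minimal sketch: query d4data/bias-detection-model directly via the
# transformers text-classification pipeline, bypassing the DBias wrapper,
# to see whether the raw model reproduces the false positives.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="d4data/bias-detection-model",
    tokenizer="d4data/bias-detection-model",
)

# Hypothetical samples; substitute the exact inputs that the validator flagged.
samples = [
    "The weather in Toronto is mild this week.",
    "People from that group can never be trusted.",
]

for text in samples:
    # classifier(text) returns a list like [{"label": ..., "score": ...}]
    print(text, "->", classifier(text))
```

If the raw model flags the same neutral inputs as biased, the problem is likely the model's calibration rather than tokenization in the wrapper; if it doesn't, the wrapper is the place to look.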
