An odd result: 37% is not a majority. #186
Comments
@sofer the algorithm is behaving as expected in this case. This is the result of the analysis on that picture:
If you look at the 'scores' value, that's the result of the analysis. Every picture is rated for all the emotions the algorithm can recognise, and the highest value is selected as the "correct" answer.
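A minimal sketch of that selection step: the API returns a score per emotion, and the highest-scoring one is taken as the answer, however low it is. The scores below are illustrative, not the actual values for this picture.

```python
# Illustrative scores only -- not the real output for this photo.
scores = {
    "anger": 0.22, "contempt": 0.03, "disgust": 0.21, "fear": 0.02,
    "happiness": 0.01, "neutral": 0.24, "sadness": 0.25, "surprise": 0.02,
}

# The emotion with the highest score is taken as the "correct" answer,
# even when that score is nowhere near a majority.
top_emotion = max(scores, key=scores.get)
print(top_emotion, scores[top_emotion])
```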
@daymos That makes sense. One solution might be to not use photos below a certain threshold of certainty, say 60 or 70, or at least 50. I love how this one has no fewer than four emotions with a score of >20. That must be quite unusual.
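The threshold idea could be sketched like this (the function name and the 0.5 cut-off are just illustrative, not anything in the codebase):

```python
# Hypothetical filter: only keep photos whose top emotion score clears
# a confidence threshold (0.5 here, i.e. 50%).
THRESHOLD = 0.5

def passes_threshold(scores, threshold=THRESHOLD):
    """Return True if the highest-scoring emotion clears the threshold."""
    return max(scores.values()) >= threshold

print(passes_threshold({"sadness": 0.37, "neutral": 0.24, "anger": 0.22}))  # False
print(passes_threshold({"happiness": 0.85, "neutral": 0.10}))               # True
```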
That picture is a toughie! @SimonJStewart sourced the pics for us; I think it wasn't easy to get high confidence from the algorithm in the end. But I'm not sure how frequent they are.
@SimonJStewart I think it's your call.
I think we should leave in ambiguous results. We are working to the principle that there is no right/wrong answer, just divergence from the mean. It does look odd on the screen, though. Wondering if this could be fixed with a different UI: rather than displaying "Emotion API was x% certain it was [top emotion]", could we show something like:
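One way the alternative display could be sketched: show the full spread of scores rather than a single "x% certain" figure. The function name and the percentages are made up for illustration.

```python
# Hypothetical rendering of the full score distribution for one photo.
def describe(scores):
    """Format all emotion scores, highest first, as a readable summary."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    return ", ".join(f"{pct:.0%} said {emotion}" for emotion, pct in ranked)

print(describe({"sadness": 0.37, "neutral": 0.30, "anger": 0.22, "disgust": 0.11}))
# 37% said sadness, 30% said neutral, 22% said anger, 11% said disgust
```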
So, if I thought the face was neutral and 37% of people thought it was sad, then aren't I in agreement with the majority of people? Something seems to have gone wrong with your algorithm in this case.