Cannot reproduce the recognition accuracy of certain expressions in the paper #8
Comments
I have the same problem. Has anyone reproduced the result in the paper?
I also have the same problem. Is there an error in my dataset? I trained on the RAF-DB dataset and only got an accuracy of 86.44%.
Has anyone tried this code? How were the results? Please help~
I trained on the RAF-DB dataset and got an accuracy of 86.25%, and trained on the AffectNet dataset and got an accuracy of 56.8%.
The MS-Celeb pre-trained weights can no longer be downloaded. Can you share them?
Have you managed to get them? The link is still broken.
Same question! |
You can check this link: https://github.com/zyh-uaiaaaa/relative-uncertainty-learning
I trained the model using the MS-Celeb pre-trained weights and the default parameters from the paper, but the accuracy on certain expressions is lower than the recognition accuracy reported in the paper's confusion matrix. I would like to know how to reproduce the per-expression recognition accuracy shown in the confusion matrix of the paper.
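For anyone comparing their own results against the paper's confusion matrix, here is a minimal sketch of how per-expression accuracy (per-class recall, i.e. the diagonal of a row-normalized confusion matrix) can be computed from test-set predictions. The class names and variable names are illustrative assumptions, not taken from this repository:

```python
import numpy as np
from sklearn.metrics import confusion_matrix

# Hypothetical setup: y_true / y_pred are integer class labels collected over
# the test set. The 7 basic expression classes of RAF-DB are assumed here.
CLASS_NAMES = ["Surprise", "Fear", "Disgust", "Happiness", "Sadness", "Anger", "Neutral"]

def per_expression_accuracy(y_true, y_pred):
    # Rows are true classes, columns are predicted classes.
    cm = confusion_matrix(y_true, y_pred, labels=range(len(CLASS_NAMES)))
    # Per-class recall = diagonal / row sum; this matches what a
    # row-normalized confusion-matrix diagonal reports in the paper.
    per_class = cm.diagonal() / cm.sum(axis=1).clip(min=1)
    for name, acc in zip(CLASS_NAMES, per_class):
        print(f"{name}: {acc:.4f}")
    # Overall accuracy, for comparison with the ~86% figures mentioned above.
    print(f"Overall: {cm.diagonal().sum() / cm.sum():.4f}")
    return cm, per_class
```

A lower value on a single row (e.g. Fear or Disgust, which have few samples in RAF-DB) can pull that expression's accuracy well below the paper's number even when the overall accuracy looks close.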