issue regarding metrics #14

Open
kocemir opened this issue Jun 16, 2024 · 0 comments

Comments

kocemir commented Jun 16, 2024

Hey, I want to ask about the metrics you used. You are using torchmetrics.functional.classification, but if I switch to sklearn, the results differ significantly. Here is the code I wrote. Do you have any guidance on the reason?

```python
from sklearn.metrics import accuracy_score, f1_score, precision_score, recall_score

# sklearn expects (y_true, y_pred) in that order
y_true = true_labels.detach().cpu().numpy()
y_pred = pred_labels.detach().cpu().numpy()
acc = accuracy_score(y_true, y_pred)
f1 = f1_score(y_true, y_pred, average="macro")
precision = precision_score(y_true, y_pred, average="macro")
recall = recall_score(y_true, y_pred, average="macro")
```

Edit: I understood the reason. The multiclass_accuracy function of torchmetrics assumes average="macro" by default. Why did you choose that, in particular for accuracy? Is it better to use average="macro" in this case, or "weighted" for all of the metrics?
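For anyone else hitting the same mismatch, here is a minimal sketch of the difference (the labels below are made up purely for illustration). With the default average="macro", torchmetrics' multiclass_accuracy computes per-class accuracy and takes an unweighted mean over classes, whereas average="micro" pools all samples and matches sklearn's plain accuracy_score:

```python
import torch
from torchmetrics.functional.classification import multiclass_accuracy
from sklearn.metrics import accuracy_score

# hypothetical imbalanced 3-class example, for illustration only
true_labels = torch.tensor([0, 0, 0, 0, 1, 2])
pred_labels = torch.tensor([0, 0, 0, 0, 2, 2])

# default average="macro": per-class accuracy, then unweighted mean
macro = multiclass_accuracy(pred_labels, true_labels, num_classes=3)
# average="micro" pools all samples, matching sklearn's accuracy_score
micro = multiclass_accuracy(pred_labels, true_labels, num_classes=3, average="micro")
sk = accuracy_score(true_labels.numpy(), pred_labels.numpy())

print(macro)      # (1.0 + 0.0 + 1.0) / 3 ≈ 0.6667
print(micro, sk)  # both 5/6 ≈ 0.8333
```

average="weighted" would instead weight each class by its support, which mainly matters on imbalanced datasets like this one.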
