🐛 Bug
`MeanAveragePrecision._get_coco_datasets()` sets all labels to zero when `self.average == "micro"`, but because `cocoeval.params.useCats == True`, `cocoeval.evaluate()` ends up with empty `dts` and `gts` whenever class 0 is not present in `cocoeval.params.catIds`.
As a result, no matter how good or bad the model's predictions actually are, `metric.compute()` returns nothing but `-1`s.
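A minimal sketch of the suspected failure mode, with no `pycocotools` dependency. The helper names (`zero_labels_for_micro`, `filter_by_cat`) are illustrative stand-ins for the torchmetrics/cocoeval internals, not the actual functions:

```python
def zero_labels_for_micro(labels):
    """Mimic _get_coco_datasets with average == "micro": every label -> 0."""
    return [0 for _ in labels]


def filter_by_cat(annotation_labels, cat_ids):
    """Mimic cocoeval.evaluate with useCats == True: keep only annotations
    whose category id appears in params.catIds."""
    return [lab for lab in annotation_labels if lab in cat_ids]


# Suppose the dataset only contains classes 1 and 2, so class 0 is
# absent from catIds.
labels = [1, 2, 2]
cat_ids = sorted(set(labels))               # [1, 2]
micro_labels = zero_labels_for_micro(labels)  # [0, 0, 0]

# With useCats on, every zeroed annotation is filtered out: empty
# dts/gts, and COCOeval then reports -1 for every metric.
print(filter_by_cat(micro_labels, cat_ids))  # []
```

This illustrates why the reported metrics are all `-1` regardless of prediction quality: the evaluation loop never sees a single ground truth or detection for the only category (0) that the micro-averaged data actually uses.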