
Evaluator


[Video: the Evaluator tab of DetectionSuite evaluating detector-generated results on the COCO Val2017 dataset]

The video above demonstrates the Evaluator tab of DetectionSuite evaluating detector-generated results on the COCO Val2017 dataset. After evaluation, a summary of results is printed that contains both the COCO mAP (mean average precision) metric and the Pascal VOC metric. More detailed results are written to a CSV file named Evaluation Results.csv, which contains class-wise and overall results for the given dataset.
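As a quick way to inspect that file, the sketch below loads it with pandas. The file name comes from the text above; the column layout (one row per class plus an overall row) is an assumption about the format, not a documented schema.

```python
import pandas as pd

# Load the evaluator's detailed output. "Evaluation Results.csv" is the
# file name stated above; the exact columns depend on the DetectionSuite
# version, so this is only a convenience for browsing the results.
results = pd.read_csv("Evaluation Results.csv")
print(results.to_string(index=False))
```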

The computed metrics have also been verified by running the same ground truth and detections through the COCO API; the results are identical.
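A cross-check of this kind can be reproduced with the official pycocotools package. The sketch below assumes the ground truth and detections are available as standard COCO-format JSON files; the file names are placeholders.

```python
from pycocotools.coco import COCO
from pycocotools.cocoeval import COCOeval

coco_gt = COCO("instances_val2017.json")      # ground-truth annotations
coco_dt = coco_gt.loadRes("detections.json")  # detector output in COCO format
evaluator = COCOeval(coco_gt, coco_dt, iouType="bbox")
evaluator.evaluate()    # per-image, per-category matching
evaluator.accumulate()  # precision/recall over thresholds and area ranges
evaluator.summarize()   # prints the standard COCO metrics, including mAP
```

The numbers printed by summarize() can then be compared directly against the summary DetectionSuite prints after evaluation.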

The computed metrics take into account all detections and area ranges in an image. AP (average precision) is computed using 101 recall thresholds from 0.0 to 1.0 in steps of 0.01, and mAP is computed by averaging AP over 10 IoU thresholds from 0.5 to 0.95 in steps of 0.05. These settings are identical to the ones used by the COCO API, and so are the results generated by DetectionSuite.
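As an illustration of that averaging scheme (a sketch, not DetectionSuite's actual implementation), the code below computes a 101-point interpolated AP from a precision-recall curve and averages it over the 10 IoU thresholds. The per-threshold PR curves are assumed to be precomputed from the detection-matching step.

```python
import numpy as np

RECALL_THRS = np.linspace(0.0, 1.0, 101)  # 101 recall thresholds, step 0.01
IOU_THRS = np.arange(0.50, 1.00, 0.05)    # 10 IoU thresholds: 0.50 ... 0.95

def interpolated_ap(recalls, precisions):
    """101-point interpolated AP for one class at one IoU threshold.

    `recalls`/`precisions` form a PR curve ordered by descending detection
    score (assumed precomputed from the matching step).
    """
    # Precision envelope: make precision non-increasing along recall.
    precisions = np.maximum.accumulate(precisions[::-1])[::-1]
    # Sample the envelope at each of the 101 recall thresholds.
    samples = [precisions[recalls >= t].max() if (recalls >= t).any() else 0.0
               for t in RECALL_THRS]
    return float(np.mean(samples))

def coco_map(pr_curves):
    """mAP for one class: mean AP over the 10 IoU thresholds.

    `pr_curves` is a list of (recalls, precisions) pairs, one per
    threshold in IOU_THRS.
    """
    return float(np.mean([interpolated_ap(r, p) for r, p in pr_curves]))
```

Averaging this per-class result over all classes gives the overall COCO mAP reported in the summary.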

For more details, please visit http://cocodataset.org/#detection-eval or https://github.com/cocodataset/cocoapi.
