This evaluation tool is adapted from the official ICDAR2015 competition script. The code is slightly modified to be compatible with Python 3 and curved text instances.
We provide several popular benchmarks, including ICDAR2013, ICDAR2015, and Total-Text; all of the ground truths have been transformed into the required format.
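For reference, a prediction zip is typically packaged the same way as the ground truth: one text file per image, with one comma-separated polygon (and, for spotting, a transcription) per line. Below is a minimal sketch assuming the standard ICDAR-style naming (`res_img_*.txt`) and field order; the exact format expected here should be checked against the provided ground-truth zips.

```python
# Minimal sketch: pack per-image prediction files into a submission zip.
# Assumes ICDAR-style naming (res_img_1.txt, ...) and lines of the form
# "x1,y1,x2,y2,...,xn,yn,transcription"; verify against the provided gt zips.
import zipfile

predictions = {
    "res_img_1.txt": [
        ([377, 117, 463, 117, 465, 130, 378, 130], "GENAXIS"),
    ],
}

with zipfile.ZipFile("my_method_preds.zip", "w") as zf:
    for filename, boxes in predictions.items():
        lines = [",".join(map(str, coords)) + "," + text for coords, text in boxes]
        zf.writestr(filename, "\n".join(lines))
```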
The default evaluation metric uses an IoU constraint of 0.5. For MANGO, which has no accurate text detection branch, the IoU constraint is set to 0.1.
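For intuition, a detection matches a ground-truth region when the intersection-over-union (IoU) of the two polygons reaches the constraint. The sketch below illustrates this check using shapely, which is only an assumption for the example; the evaluation script's own polygon code may differ.

```python
# Illustrative IoU check between two polygons (shapely is used here only
# for the example; the evaluation script may use its own geometry code).
from shapely.geometry import Polygon

def polygon_iou(gt_points, det_points):
    gt, det = Polygon(gt_points), Polygon(det_points)
    union = gt.union(det).area
    return gt.intersection(det).area / union if union > 0 else 0.0

gt = [(0, 0), (100, 0), (100, 30), (0, 30)]
det = [(5, 0), (100, 0), (100, 30), (5, 30)]
print(polygon_iou(gt, det) >= 0.5)  # True: overlap is well above the 0.5 constraint
```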
Directly running
python script.py -g=gts/gt-icdar2013.zip -s=preds/mango_r50_ic13_none.zip -word_spotting=false -iou=0.1
will produce:
num_gt, num_det: 917 1038
Origin:
det_recall: 0.9269 det_precision: 0.9626 det_hmean: 0.9444
spot_recall: 0.795 spot_precision: 0.8256 spot_hmean: 0.81
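Each hmean value is the harmonic mean (F-score) of the corresponding recall and precision; for example, the detection hmean above can be reproduced from det_recall and det_precision:

```python
# Sanity check: hmean is the harmonic mean of recall and precision.
def hmean(recall, precision):
    total = recall + precision
    return 0.0 if total == 0 else 2 * recall * precision / total

print(round(hmean(0.9269, 0.9626), 4))  # 0.9444, matching det_hmean above
```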
Go into the directory of each algorithm for detailed evaluation results.