Replies: 2 comments
-
Hi, following on from the above, I have started working on code that runs non-tiled images through SAHI with an MMDetection model (trained on tiled imagery), converts the results into the form required by MMDetection's confusion matrix script, and then outputs a final confusion matrix. Currently the code runs without error; however, the confusion matrix it outputs is not correct. After a fair bit of debugging, I'm wondering whether the format of the adapted results JSON from SAHI is correct. Any help with this is greatly appreciated!
The above code produces a confusion matrix which looks as follows: [screenshot of the confusion matrix]. Based on the visualised results, the model performs much better than the confusion matrix suggests.
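In case it helps others, the pipeline described above looks roughly like the sketch below. It is not the original snippet: the paths, confidence threshold, and tiling parameters are placeholders, and it assumes MMDetection 2.x's detection results format (per image, one `(n, 5)` array of `[x_min, y_min, x_max, y_max, score]` per class, which is what `tools/analysis_tools/confusion_matrix.py` consumes) and 0-based contiguous category ids.

```python
import pickle

import numpy as np
from pycocotools.coco import COCO
from sahi import AutoDetectionModel
from sahi.predict import get_sliced_prediction

# Placeholder paths: swap in your own checkpoint, config, and annotations.
detection_model = AutoDetectionModel.from_pretrained(
    model_type="mmdet",
    model_path="work_dirs/my_model/latest.pth",
    config_path="work_dirs/my_model/config.py",
    confidence_threshold=0.3,
    device="cuda:0",
)

coco = COCO("annotations/non_tiled_test.json")
num_classes = len(coco.getCatIds())

results = []  # one entry per image, in annotation-file order
for img_id in coco.getImgIds():
    file_name = coco.loadImgs(img_id)[0]["file_name"]
    prediction = get_sliced_prediction(
        f"images/{file_name}",
        detection_model,
        slice_height=250,
        slice_width=250,
        overlap_height_ratio=0.5,
        overlap_width_ratio=0.5,
    )
    # Regroup SAHI's merged predictions into MMDetection 2.x format:
    # a list with one [x_min, y_min, x_max, y_max, score] array per class.
    # This assumes SAHI's category ids are the model's 0-based class indices.
    per_class = [[] for _ in range(num_classes)]
    for obj in prediction.object_prediction_list:
        box = obj.bbox  # SAHI keeps min/max corners internally
        per_class[obj.category.id].append(
            [box.minx, box.miny, box.maxx, box.maxy, obj.score.value]
        )
    results.append(
        [np.array(dets, dtype=np.float32).reshape(-1, 5) for dets in per_class]
    )

# The pickle can then be passed to tools/analysis_tools/confusion_matrix.py
# together with the model config and the non-tiled annotation file.
with open("sahi_results.pkl", "wb") as f:
    pickle.dump(results, f)
```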
-
Update: I managed to fix the above code. The issue was the formatting of the bounding boxes: SAHI exports boxes as [x_top_left, y_top_left, width, height], while my ground truth data is stored as [x_top_left, y_top_left, x_bottom_right, y_bottom_right]. I assume that when SAHI visualises ground truth data it does this conversion on the fly as needed? Updated code for those who may come across this thread:
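The fix itself boils down to a one-line conversion; a minimal sketch (the helper name is illustrative):

```python
def xywh_to_xyxy(bbox):
    """Convert a COCO-style [x_min, y_min, width, height] box into
    [x_min, y_min, x_max, y_max] so it matches the ground truth format."""
    x, y, w, h = bbox
    return [x, y, x + w, y + h]

# Example: a SAHI detection exported via result.to_coco_annotations()
# carries an [x, y, w, h] box; convert it before matching against ground
# truth stored as [x1, y1, x2, y2].
assert xywh_to_xyxy([10, 20, 30, 40]) == [10, 20, 40, 60]
```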
The updated code produces the following output: [screenshot of the corrected confusion matrix]
-
Hi there,
I have an MMDetection model which I can load and run inference with via SAHI. The model is trained on tiled imagery (e.g. 250x250 tiles with 0.5 overlap). I have ground truth data in COCO JSON format for both the tiled and non-tiled images. I know it is possible to generate confusion matrices using MMDetection's confusion matrix script, but is it possible to generate a confusion matrix when the model is run through SAHI, comparing against the non-tiled test set JSON? This would be a big help, as it would allow me to compare model performance with and without SAHI.
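For reference, this is roughly how I load the model and run sliced inference (a minimal sketch assuming SAHI's `AutoDetectionModel` interface; the paths and threshold are placeholders):

```python
from sahi import AutoDetectionModel
from sahi.predict import get_sliced_prediction

# Load the MMDetection checkpoint through SAHI (placeholder paths).
detection_model = AutoDetectionModel.from_pretrained(
    model_type="mmdet",
    model_path="work_dirs/my_model/latest.pth",
    config_path="work_dirs/my_model/config.py",
    confidence_threshold=0.3,
    device="cuda:0",
)

# Slice the full-size image the same way the training tiles were cut:
# 250x250 windows with 50% overlap.
result = get_sliced_prediction(
    "images/full_scene.jpg",
    detection_model,
    slice_height=250,
    slice_width=250,
    overlap_height_ratio=0.5,
    overlap_width_ratio=0.5,
)
```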
Thank you :)