
Class sensitive scorer independent of COCO #1

Open
PaulHax opened this issue Jun 12, 2024 · 2 comments


PaulHax commented Jun 12, 2024

Hello! I'm helping with the NRTK Explorer app. @vicentebolea is working on generating a score comparing two images here: Kitware/nrtk-explorer#61

Sometimes the configured Object Detection Model outputs categories that are not categories in the COCO JSON. nrtk.impls.score_detections.coco_scorer errors, understandably, when that happens. nrtk.impls.score_detections.class_agnostic_pixelwise_iou_scorer works just fine. But...

Is there a way to get a score that takes into account the class/category of the annotations (independent of the COCO JSON)?
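Concretely, something like this toy sketch is what I'm imagining. This is pure illustration, not nrtk code: plain tuples stand in for bounding boxes, and the function name and signature are made up.

```python
# Hypothetical sketch: a class-sensitive IoU score that relies only on the
# labels present in the two annotation sets, not on a COCO category file.
from collections import defaultdict

Box = tuple  # (min_x, min_y, max_x, max_y)

def iou(a: Box, b: Box) -> float:
    """Intersection-over-union of two axis-aligned boxes."""
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def class_sensitive_score(actual, predicted) -> float:
    """actual/predicted: lists of (label, Box); labels can be any hashable."""
    by_label = defaultdict(list)
    for label, box in predicted:
        by_label[label].append(box)
    scores = []
    for label, box in actual:
        # Best IoU among predictions sharing this annotation's label;
        # labels missing from `predicted` simply score 0 instead of erroring.
        candidates = by_label.get(label, [])
        scores.append(max((iou(box, c) for c in candidates), default=0.0))
    return sum(scores) / len(scores) if scores else 0.0
```

The point is that unrecognized categories just contribute a 0 to the score rather than raising, since the labels never have to be looked up in an external category list.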

@bjrichardwebster (Contributor) commented:

Hi Paul, sorry, as you know things have been a bit chaotic lately, so it's taking me time to get back to you. We have work in the pipeline to add new scorers to nrtk that would support more metrics; our intern will be starting on this soon. However, I don't believe any of our built-in methods would do this right now.

Could I get something on our work calendar and we can discuss further? I might be able to get our intern on it sooner.


PaulHax commented Jun 20, 2024

In case it's helpful someday, I'm just dumping the notes on the scorer API that I made while using it in NRTK Explorer.

Motivation: NRTK Explorer wants to make these comparisons:

  • ground truth vs. model predictions on the original image
  • ground truth vs. model predictions on the perturbed/transformed image
  • model predictions on the original image vs. model predictions on the perturbed/transformed image

The same type for both the actual and predicted parameters would be nice.

The NRTK Explorer app has quite a bit of code to massage the app's data structures for score to support all of the above comparison combos: Kitware/nrtk-explorer@9106189#diff-e2419bbe5bf4620af45160c9f8c0f759e403764ace61b5d285844303972abf0eR7-R94

Maybe something like this:

```python
from typing import Dict, Hashable, Sequence, Tuple

# AxisAlignedBoundingBox is the bounding-box type from smqtk-image-io
# that nrtk's interfaces already use
from smqtk_image_io import AxisAlignedBoundingBox

Category = Hashable
Confidence = float
Annotation = Tuple[Dict[Category, Confidence], AxisAlignedBoundingBox]
AnnotationGroups = Sequence[Sequence[Annotation]]
```

Document parameter types

The predicted parameter type is `Sequence[Sequence[Tuple[AxisAlignedBoundingBox, Dict[Hashable, float]]]]`. What should I put in the `Hashable` part of the `Dict`? Category ID, category name, my own random ID? Vicente figured it out, but I was puzzled.
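For illustration, here is the shape I eventually used. Plain tuples stand in for `AxisAlignedBoundingBox` instances, and the class-name-string keys are just one choice; my understanding is that any hashable works as long as actual and predicted use the same keys.

```python
# Illustrative shape of the `predicted` argument: one inner list per image,
# each detection pairing a box with per-class confidences. Keys here are
# class names, but any consistent hashable (int ID, string, enum) would do.
predicted = [
    [  # image 0: one detection with per-class confidences
        ((10, 20, 50, 80), {"cat": 0.9, "dog": 0.1}),
    ],
    [],  # image 1: no detections
]
```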

Don't require the actual argument to have annotations for every image/group

Because maybe there is an image with no ground-truth annotations in the dataset?

Factor out an image-pair score(imageA, imageB) function

Sometimes I just want to score one image pair, not a whole batch (like in the lazy/async image processing pipeline of my dreams).
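A sketch of the factoring I mean, assuming the batch API could be expressed on top of a pair function. The names here are made up, not nrtk's:

```python
# Hypothetical factoring: lift a per-image-pair scorer into a batch scorer,
# so callers can use whichever granularity they need.
from typing import Callable, List, Sequence, TypeVar

Ann = TypeVar("Ann")  # whatever per-image annotation group type the scorer takes

def make_batch_scorer(
    score_pair: Callable[[Ann, Ann], float],
) -> Callable[[Sequence[Ann], Sequence[Ann]], List[float]]:
    def score_batch(actual: Sequence[Ann], predicted: Sequence[Ann]) -> List[float]:
        if len(actual) != len(predicted):
            raise ValueError("actual and predicted must be the same length")
        return [score_pair(a, p) for a, p in zip(actual, predicted)]
    return score_batch

# Usage with a toy pair scorer that just compares annotation counts:
count_match = make_batch_scorer(lambda a, p: float(len(a) == len(p)))
```

With this shape, a lazy/async pipeline can call score_pair directly as each image finishes, while batch callers keep the existing list-in/list-out behavior.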

Image-resolution-independent example

The nrtk_pybsm image transformer outputs an image with a different resolution than the input image. The pixel-wise comparison breaks when scoring object detection model predictions on the transformed image against the ground truth. It would be nice if there were an example that "resized" the transformed image's annotations to match the original image before passing them to score. Or should we be resizing the transformed image back to the original image's size?
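For the first option, a helper along these lines could rescale predicted boxes from the transformed resolution back to the original before scoring. The box format and names are my own assumptions, not nrtk API:

```python
# Sketch: map a box from one image resolution to another by scaling each
# coordinate by the ratio of the destination size to the source size.
Box = tuple  # (min_x, min_y, max_x, max_y)
Size = tuple  # (width, height)

def rescale_box(box: Box, src_size: Size, dst_size: Size) -> Box:
    sx = dst_size[0] / src_size[0]
    sy = dst_size[1] / src_size[1]
    return (box[0] * sx, box[1] * sy, box[2] * sx, box[3] * sy)

def rescale_annotations(annotations, src_size: Size, dst_size: Size):
    """Rescale (label, box) pairs from src_size resolution to dst_size."""
    return [(label, rescale_box(box, src_size, dst_size))
            for label, box in annotations]
```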
