
Oriented Bounding Box Tracker #1774

Open
1 task done
LilBabines opened this issue Dec 19, 2024 · 4 comments
Labels
question Further information is requested

Comments

@LilBabines

Search before asking

  • I have searched the Yolo Tracking issues and found no similar bug report.

Question

Hello,

I’m currently working on a project that requires object tracking with Oriented Bounding Boxes (OBB). Despite thorough research, I haven’t found any convincing implementation for a tracker specifically handling OBB, either in this repository or elsewhere. For instance, the yolo track model=yolov11m-obb command and the code from ultralytics/trackers don’t seem to clearly integrate OBB.

In this regard, I’d like to ask:

  • Do you think it would be beneficial to adapt a tracker to support OBB, for example by adding a dimension to the Kalman Filter for angle prediction?
  • Alternatively, would using a centroid association function or adapting iou_batch be sufficient?
  • Or would it be better to avoid the problem altogether with something like update(dets=obb_to_x1y1x2y2(boxes))?
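For reference, the last option could look something like the sketch below, assuming an (cx, cy, w, h, r) input with r in radians; obb_to_x1y1x2y2 is the hypothetical helper named above, not an existing function in this repo:

```python
import math

def obb_to_x1y1x2y2(box):
    """Convert one (cx, cy, w, h, r) OBB (r in radians) to the
    axis-aligned box enclosing its four rotated corners."""
    cx, cy, w, h, r = box
    cos_r, sin_r = math.cos(r), math.sin(r)
    # Half-extents of the rotated rectangle's axis-aligned hull
    half_w = (abs(w * cos_r) + abs(h * sin_r)) / 2
    half_h = (abs(w * sin_r) + abs(h * cos_r)) / 2
    return (cx - half_w, cy - half_h, cx + half_w, cy + half_h)
```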

Thank you in advance for your support and for the excellent work on this package! I’m looking forward to your feedback.

Best regards,

@LilBabines LilBabines added the question Further information is requested label Dec 19, 2024
@mikel-brostrom
Owner

mikel-brostrom commented Dec 19, 2024

Do you think it would be beneficial to adapt a tracker to support OBB, for example by adding a dimension to the Kalman Filter for angle prediction?

Hi @LilBabines!

"Straightening" the bounding boxes will give you worse association with any type of IoU. Think of a top-down view where two cars sit side by side: with OBBs there would be zero overlap, but there may be overlap once they are straightened. If your use case is simple, it may be possible to simplify the problem with an obb_to_x1y1x2y2 function.
Note also that x1y1x2y2 boxes are expected for feature extraction, so that would need to be adapted as well in the OBB case.
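The side-by-side car case can be checked numerically. Below is a minimal pure-Python sketch (illustrative only): a separating-axis test shows the two OBBs are disjoint, while the IoU of their axis-aligned hulls is positive:

```python
import math

def rect_corners(cx, cy, w, h, r):
    """Corners of a (cx, cy, w, h, r) box, r in radians."""
    c, s = math.cos(r), math.sin(r)
    return [(cx + c * dx - s * dy, cy + s * dx + c * dy)
            for dx, dy in ((-w/2, -h/2), (w/2, -h/2), (w/2, h/2), (-w/2, h/2))]

def obbs_overlap(a, b):
    """Separating axis theorem for two rotated rectangles."""
    pa, pb = rect_corners(*a), rect_corners(*b)
    for poly in (pa, pb):
        for i in range(4):
            # Edge normal is a candidate separating axis
            ex = poly[(i + 1) % 4][0] - poly[i][0]
            ey = poly[(i + 1) % 4][1] - poly[i][1]
            ax, ay = -ey, ex
            proj_a = [ax * x + ay * y for x, y in pa]
            proj_b = [ax * x + ay * y for x, y in pb]
            if max(proj_a) < min(proj_b) or max(proj_b) < min(proj_a):
                return False
    return True

def aabb(box):
    """Axis-aligned hull ("straightened" box) of an OBB."""
    xs = [x for x, _ in rect_corners(*box)]
    ys = [y for _, y in rect_corners(*box)]
    return min(xs), min(ys), max(xs), max(ys)

def aabb_iou(a, b):
    ax1, ay1, ax2, ay2 = aabb(a)
    bx1, by1, bx2, by2 = aabb(b)
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union

# Two "cars" side by side at 45 degrees
car1 = (0.0, 0.0, 4.0, 1.5, math.pi / 4)
car2 = (1.5, -1.5, 4.0, 1.5, math.pi / 4)
print(obbs_overlap(car1, car2))   # False: zero OBB overlap
print(aabb_iou(car1, car2) > 0)   # True: straightened boxes overlap
```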

@LilBabines
Author

Thank you very much for your quick response. As you mentioned, "straightening" the bounding boxes is indeed risky, and I fully understand the challenges of oriented versus non-oriented bounding boxes.

I’ve attached an image to illustrate my use case (quite similar to #1725), which I believe is very well suited to oriented bounding boxes. I’m considering starting with OCSort for a fairly complete adaptation to oriented bounding boxes, and I will look into feature extraction later if needed.

Do you think this could be a valuable enhancement and interest the community?

[Attached image: frame_obb]

@mikel-brostrom
Owner

mikel-brostrom commented Dec 19, 2024

Do you think this could be a valuable enhancement and interest the community?

Absolutely. OBB tracking would be a natural next step for this repo, given the wider adoption of these object detection methods. I recommend triggering the load of a specific Kalman Filter designed for OBB when the detection input is in one of the OBB formats (xyxyxyxy, xywhr, or whatever your OBB format is), so that we can port this to the rest of the tracking methods.

BB:  (x, y, x, y, conf, cls)
OBB: (x, y, w, h, r, conf, cls)
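One way to trigger the right filter could be to dispatch on the detection width. The sketch below is illustrative only; ConstantVelocityKF and make_filter are placeholder names, not classes in this repo, and a real OBB filter would also need angle wrap-around handling in the residual:

```python
import numpy as np

class ConstantVelocityKF:
    """Minimal constant-velocity Kalman filter over a generic
    measurement vector z (a sketch, not this repo's implementation)."""
    def __init__(self, dim_z):
        self.dim_z = dim_z
        n = 2 * dim_z                            # state = [z, z_dot]
        self.F = np.eye(n)
        self.F[:dim_z, dim_z:] = np.eye(dim_z)   # z += z_dot each step
        self.H = np.eye(dim_z, n)                # we observe z directly
        self.x = np.zeros(n)
        self.P = np.eye(n)

    def predict(self):
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + np.eye(len(self.x)) * 1e-2
        return self.x[:self.dim_z]

    def update(self, z):
        y = np.asarray(z, dtype=float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + np.eye(self.dim_z) * 1e-1
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(len(self.x)) - K @ self.H) @ self.P

def make_filter(dets):
    """Dispatch on detection width: 6 cols -> (x, y, x, y, conf, cls),
    7 cols -> (x, y, w, h, r, conf, cls)."""
    n_cols = np.asarray(dets).shape[1]
    if n_cols == 7:
        return ConstantVelocityKF(dim_z=5)   # x, y, w, h, r
    if n_cols == 6:
        return ConstantVelocityKF(dim_z=4)   # x, y, x, y
    raise ValueError(f"unexpected detection width: {n_cols}")
```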

@mikel-brostrom
Owner

mikel-brostrom commented Dec 19, 2024

OBB straightening:

import cv2
import math
import numpy as np

def extract_and_straighten_crops(image, cxs, cys, ws, hs, rs):
    """
    Extract and straighten multiple crops from an image given arrays of OBB parameters.

    Parameters
    ----------
    image : numpy.ndarray
        Input image (BGR or RGB).
    cxs, cys : numpy.ndarray
        Arrays of center coordinates for each OBB.
    ws, hs : numpy.ndarray
        Arrays of widths and heights for each OBB.
    rs : numpy.ndarray
        Array of rotation angles in radians for each OBB.

    Returns
    -------
    crops : list of numpy.ndarray
        Extracted, straightened crops for each OBB.
    """
    angles_degrees = np.degrees(rs)
    rows, cols = image.shape[:2]

    # Precompute rotation matrices (still a loop, but it's small and fast)
    rotation_matrices = [
        cv2.getRotationMatrix2D((float(cx), float(cy)), float(-angle), 1.0)
        for cx, cy, angle in zip(cxs, cys, angles_degrees)
    ]

    # Vectorized bounding box calculation
    half_ws = ws / 2
    half_hs = hs / 2

    x_mins = np.int32(cxs - half_ws)
    y_mins = np.int32(cys - half_hs)
    x_maxs = np.int32(cxs + half_ws)
    y_maxs = np.int32(cys + half_hs)

    # Clip coordinates to image boundaries
    x_mins = np.clip(x_mins, 0, cols)
    y_mins = np.clip(y_mins, 0, rows)
    x_maxs = np.clip(x_maxs, 0, cols)
    y_maxs = np.clip(y_maxs, 0, rows)

    crops = []
    for (M, x_min, y_min, x_max, y_max) in zip(rotation_matrices, x_mins, y_mins, x_maxs, y_maxs):
        rotated = cv2.warpAffine(
            image, M, (cols, rows),
            flags=cv2.INTER_LINEAR,
            borderMode=cv2.BORDER_CONSTANT, borderValue=(122, 122, 122)
        )

        crop = rotated[y_min:y_max, x_min:x_max]
        crops.append(crop)

    return crops

if __name__ == "__main__":
    # Example usage:
    image = cv2.imread("bus.jpg")

    cxs = np.array([740, 500, 300])
    cys = np.array([636, 400, 200])
    ws = np.array([138, 80, 100])
    hs = np.array([483, 60, 120])
    r_degrees = np.array([45, 30, -20])
    rs = np.radians(r_degrees)

    crops = extract_and_straighten_crops(image, cxs, cys, ws, hs, rs)

    for i, crop in enumerate(crops):
        cv2.imshow(f"Straightened Crop {i}", crop)
        cv2.waitKey(0)
    cv2.destroyAllWindows()

Could be used for feature extraction later on 🚀
