
Releases: obss/sahi

v0.6.1

09 Jul 14:49
d0e15c3
  • refactor slice_coco script (#165)
  • make False the default for ignore_negative_samples (#166)

v0.6.0

06 Jul 14:53
3c2cc3c

enhancement

  • add coco_evaluation script, refactor coco_error_analysis script (#162)
    coco_evaluation.py script usage:
python scripts/coco_evaluation.py dataset.json results.json

will calculate COCO evaluation results and export them to the given output directory.

If you want to specify mAP metric type, set it as --metric bbox mask.

If you also want to calculate classwise scores, add the --classwise argument.

If you want to specify max detections, set it as --proposal_nums 10 100 500.

If you want to specify a specific IOU threshold, set it as --iou_thrs 0.5. By default, scores are reported for the 0.50:0.95 and 0.5 IOU thresholds.

If you want to specify the export directory, set it as --out_dir output/folder/directory.
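
Putting these options together, one possible full invocation looks like this (paths are placeholders):

python scripts/coco_evaluation.py dataset.json results.json --metric bbox mask --classwise --proposal_nums 10 100 500 --iou_thrs 0.5 --out_dir output/folder/directory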

coco_error_analysis.py script usage:

python scripts/coco_error_analysis.py dataset.json results.json

will calculate COCO error plots and export them to the given output directory.

If you want to specify mAP result type, set it as --types bbox mask.

If you want to export extra mAP bar plots and annotation area stats, add the --extraplots argument.

If you want to specify area regions, set it as --areas 1024 9216 10000000000.

If you want to specify the export directory, set it as --out_dir output/folder/directory.
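
Combined, an illustrative call with all of the options above enabled:

python scripts/coco_error_analysis.py dataset.json results.json --types bbox mask --extraplots --areas 1024 9216 10000000000 --out_dir output/folder/directory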

bugfixes

  • prevent empty bbox coco json creation (#164)
  • don't create MOT info when type='det' (#163)

breaking changes

  • refactor predict (#161)
    By default, the scripts apply both standard and sliced prediction (multi-stage inference). If you don't want to perform sliced prediction, add the --no_sliced_pred argument. If you don't want to perform standard prediction, add the --no_standard_pred argument.
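
For example, standard-only inference could be requested like this (the scripts/predict.py path and the --source argument are assumptions based on the other scripts in this release; only the --no_sliced_pred flag is stated above):

python scripts/predict.py --source path/to/images --no_sliced_pred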

v0.5.2

05 Jul 19:49
c16468f
  • fix negative bbox coord error (#160)

v0.5.1

03 Jul 01:45
0f3efb4
  • add predict_fiftyone script to perform sliced/standard inference over yolov5/mmdetection models and visualize incorrect predictions in the FiftyOne UI.

(sahi_fiftyone demo screenshot)

  • fix mot utils (#152)

v0.5.0

28 Jun 01:02
b1a2493
  • add check for image size in slice_image (#147)
  • refactor prediction output (#148)
  • fix slice_image in readme (#149)
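
Related to the slice_image change, a minimal slicing sketch (keyword arguments follow the readme of this period and should be treated as assumptions; image_path is a placeholder):

from sahi.slicing import slice_image

# slice a single image into 256x256 tiles with 20% overlap;
# return value details may differ between versions
slice_image_result = slice_image(
    image=image_path,
    output_file_name="sliced_image",
    output_dir="sliced_images/",
    slice_height=256,
    slice_width=256,
    overlap_height_ratio=0.2,
    overlap_width_ratio=0.2,
)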

refactor prediction output

from sahi.predict import get_prediction, get_sliced_prediction

# perform standard or sliced prediction
result = get_prediction(image, detection_model)
result = get_sliced_prediction(image, detection_model)

# export prediction visuals to "demo_data/"
result.export_visuals(export_dir="demo_data/")

# convert predictions to coco annotations
result.to_coco_annotations()

# convert predictions to coco predictions
result.to_coco_predictions(image_id=1)

# convert predictions to [imantics](https://github.com/jsbroks/imantics) annotation format
result.to_imantics_annotations()

# convert predictions to [fiftyone](https://github.com/voxel51/fiftyone) detection format
result.to_fiftyone_detections()
  • Check more examples in the Colab notebooks:

YOLOv5 + SAHI demo (Colab notebook)

MMDetection + SAHI demo (Colab notebook)

v0.4.8

23 Jun 10:42
559f4bd
  • update mot utils (#143)
  • add fiftyone utils (#144)

FiftyOne Utilities

Explore COCO dataset via FiftyOne app:
from sahi.utils.fiftyone import launch_fiftyone_app
# launch fiftyone app:
session = launch_fiftyone_app(coco_image_dir, coco_json_path)
# close fiftyone app:
session.close()
Convert predictions to FiftyOne detection:
from sahi import get_sliced_prediction
# perform sliced prediction
result = get_sliced_prediction(
    image,
    detection_model,
    slice_height = 256,
    slice_width = 256,
    overlap_height_ratio = 0.2,
    overlap_width_ratio = 0.2
)
# convert first object into fiftyone detection format
object_prediction = result["object_prediction_list"][0]
fiftyone_detection = object_prediction.to_fiftyone_detection(image_height=720, image_width=1280)

v0.4.6

14 Jun 23:37
596eb86

new feature

  • add more mot utils (#133)
MOT Challenge formatted ground truth dataset creation:
  • import required classes:
from sahi.utils.mot import MotAnnotation, MotFrame, MotVideo
  • init video:
mot_video = MotVideo(name="sequence_name")
  • init first frame:
mot_frame = MotFrame()
  • add annotations to frame:
mot_frame.add_annotation(
  MotAnnotation(bbox=[x_min, y_min, width, height])
)

mot_frame.add_annotation(
  MotAnnotation(bbox=[x_min, y_min, width, height])
)
  • add frame to video:
mot_video.add_frame(mot_frame)
  • export in MOT challenge format:
mot_video.export(export_dir="mot_gt", type="gt")
  • your MOT challenge formatted ground truth files are ready under the mot_gt/sequence_name/ folder.
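Putting the steps above together into one snippet (bbox values are illustrative):

from sahi.utils.mot import MotAnnotation, MotFrame, MotVideo

# init video and first frame
mot_video = MotVideo(name="sequence_name")
mot_frame = MotFrame()

# add two annotations with illustrative [x_min, y_min, width, height] boxes
mot_frame.add_annotation(MotAnnotation(bbox=[10, 20, 50, 80]))
mot_frame.add_annotation(MotAnnotation(bbox=[100, 40, 30, 60]))

# add frame to video and export in MOT challenge format
mot_video.add_frame(mot_frame)
mot_video.export(export_dir="mot_gt", type="gt")
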
Advanced MOT Challenge formatted ground truth dataset creation:
  • you can customize the tracker while initializing the MotVideo object:
tracker_params = {
  'max_distance_between_points': 30,
  'min_detection_threshold': 0,
  'hit_inertia_min': 10,
  'hit_inertia_max': 12,
  'point_transience': 4,
}
# for details: https://github.com/tryolabs/norfair/tree/master/docs#arguments

mot_video = MotVideo(tracker_kwargs=tracker_params)
  • you can omit automatic track id generation and directly provide track ids of annotations:
# create annotations with track ids:
mot_frame.add_annotation(
  MotAnnotation(bbox=[x_min, y_min, width, height], track_id=1)
)

mot_frame.add_annotation(
  MotAnnotation(bbox=[x_min, y_min, width, height], track_id=2)
)

# add frame to video:
mot_video.add_frame(mot_frame)

# export in MOT challenge format without automatic track id generation:
mot_video.export(export_dir="mot_gt", type="gt", use_tracker=False)
  • you can overwrite the results in an already present directory by adding exist_ok=True:
mot_video.export(export_dir="mot_gt", type="gt", exist_ok=True)
MOT Challenge formatted tracker output creation:
  • import required classes:
from sahi.utils.mot import MotAnnotation, MotFrame, MotVideo
  • init video by providing video name:
mot_video = MotVideo(name="sequence_name")
  • init first frame:
mot_frame = MotFrame()
  • add tracker outputs to frame:
mot_frame.add_annotation(
  MotAnnotation(bbox=[x_min, y_min, width, height], track_id=1)
)

mot_frame.add_annotation(
  MotAnnotation(bbox=[x_min, y_min, width, height], track_id=2)
)
  • add frame to video:
mot_video.add_frame(mot_frame)
  • export in MOT challenge format:
mot_video.export(export_dir="mot_test", type="test")
  • your MOT challenge formatted tracker output file is ready as mot_test/sequence_name.txt.
Advanced MOT Challenge formatted tracker output creation:
  • you can enable the tracker and directly provide object detector outputs:
# add object detector outputs:
mot_frame.add_annotation(
  MotAnnotation(bbox=[x_min, y_min, width, height])
)

mot_frame.add_annotation(
  MotAnnotation(bbox=[x_min, y_min, width, height])
)

# add frame to video:
mot_video.add_frame(mot_frame)

# export in MOT challenge format by applying a kalman-based tracker:
mot_video.export(export_dir="mot_test", type="test", use_tracker=True)
  • you can customize the tracker while initializing the MotVideo object:
tracker_params = {
  'max_distance_between_points': 30,
  'min_detection_threshold': 0,
  'hit_inertia_min': 10,
  'hit_inertia_max': 12,
  'point_transience': 4,
}
# for details: https://github.com/tryolabs/norfair/tree/master/docs#arguments

mot_video = MotVideo(tracker_kwargs=tracker_params)
  • you can overwrite the results in an already present directory by adding exist_ok=True:
mot_video.export(export_dir="mot_test", type="test", exist_ok=True)

documentation

  • update coco docs (#134)
  • add colab links into readme (#135)

Check the YOLOv5 + SAHI demo (Colab notebook)

Check the MMDetection + SAHI demo (Colab notebook)

bug fixes

  • fix demo notebooks (#136)

v0.4.5

12 Jun 22:11
9ee4b00

enhancement

  • add colab demo support (#127)
  • add warning for image files without suffix (#129)
  • separate mmdet/yolov5 utils (#130)

v0.4.4

10 Jun 08:59
ad1c364

new feature

documentation

  • update installation (#118)
  • add details for coco2yolov5 usage (#120)

bug fixes

  • fix typo (#117)
  • update coco2yolov5.py (#115)

breaking changes

  • drop python 3.6 support (#123)

v0.4.3

31 May 12:20
4642ede

refactorize postprocess (#109)

  • specify postprocess type as --postprocess_type UNIONMERGE or --postprocess_type NMS to be applied over sliced predictions
  • specify postprocess match metric as --match_metric IOS for intersection over smaller area or --match_metric IOU for intersection over union
  • specify postprocess match threshold as --match_thresh 0.5
  • add the --class_agnostic argument to ignore category ids of the predictions during postprocess (merging/NMS); see the combined example after this list
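
An illustrative combined invocation (the scripts/predict.py path and the --source argument are assumptions; the postprocess flags are the ones listed above):

python scripts/predict.py --source path/to/images --postprocess_type UNIONMERGE --match_metric IOS --match_thresh 0.5 --class_agnostic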

export visuals with gt (#107)

  • export visuals with predicted + gt annotations into visuals_with_gt folder when coco_file_path is provided
  • keep source folder structure when exporting results
  • add from_coco_annotation_dict classmethod to ObjectAnnotation (see the sketch after this list)
  • remove unused imports/classes/parameters
  • better typing hints
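
A minimal sketch of the new classmethod, assuming it lives on sahi.annotation.ObjectAnnotation and accepts a COCO-style annotation dict; the parameter names (annotation_dict, full_shape) and the dict values are illustrative assumptions:

from sahi.annotation import ObjectAnnotation

# illustrative COCO-format annotation dict
coco_annotation_dict = {
    "bbox": [100, 100, 50, 80],  # [x_min, y_min, width, height]
    "category_id": 1,
    "segmentation": [],
    "iscrowd": 0,
    "area": 4000,
}

# full_shape is assumed to describe the source image size as [height, width]
object_annotation = ObjectAnnotation.from_coco_annotation_dict(
    annotation_dict=coco_annotation_dict,
    full_shape=[720, 1280],
)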