Releases: obss/sahi
v0.6.1
v0.6.0
enhancement
- add coco_evaluation script, refactor coco_error_analysis script (#162)
`coco_evaluation.py` script usage:
`python scripts/coco_evaluation.py dataset.json results.json` will calculate COCO evaluation results and export them to the given output directory.
- If you want to specify the mAP metric type, set it as `--metric bbox mask`.
- If you want to also calculate classwise scores, add the `--classwise` argument.
- If you want to specify max detections, set it as `--proposal_nums 10 100 500`.
- If you want to specify a specific IoU threshold, set it as `--iou_thrs 0.5`. The default includes both 0.50:0.95 and 0.5 scores.
- If you want to specify the export directory, set it as `--out_dir output/folder/directory`.
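For example, a single run combining several of these flags might look like the following; the dataset/result file names and the output directory are placeholders:

`python scripts/coco_evaluation.py dataset.json results.json --metric bbox --classwise --iou_thrs 0.5 --out_dir outputs/coco_eval/`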
`coco_error_analysis.py` script usage:
`python scripts/coco_error_analysis.py dataset.json results.json` will calculate COCO error plots and export them to the given output directory.
- If you want to specify the mAP result type, set it as `--types bbox mask`.
- If you want to export extra mAP bar plots and annotation area stats, add the `--extraplots` argument.
- If you want to specify area regions, set it as `--areas 1024 9216 10000000000`.
- If you want to specify the export directory, set it as `--out_dir output/folder/directory`.
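Similarly, a run that also exports the extra plots might look like this; the file names and output directory are placeholders:

`python scripts/coco_error_analysis.py dataset.json results.json --types bbox --extraplots --out_dir outputs/coco_error/`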
bugfixes
breaking changes
- refactor predict (#161)
By default, the scripts apply both standard and sliced prediction (multi-stage inference). If you don't want to perform sliced prediction, add the `--no_sliced_pred` argument. If you don't want to perform standard prediction, add the `--no_standard_pred` argument.
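As a sketch, a sliced-only run could be invoked as below; the `scripts/predict.py` path and the `--source`/`--model_path` arguments are assumptions here and may not match the script's actual interface, so check the prediction script in the repo for its exact arguments:

`python scripts/predict.py --source path/to/images --model_path path/to/model --no_standard_pred`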
v0.5.2
v0.5.1
v0.5.0
- add check for image size in slice_image (#147)
- refactor prediction output (#148)
- fix slice_image in readme (#149)
refactor prediction output
# perform standard prediction
result = get_prediction(image, detection_model)
# or perform sliced prediction
result = get_sliced_prediction(image, detection_model)
# export prediction visuals to "demo_data/"
result.export_visuals(export_dir="demo_data/")
# convert predictions to coco annotations
result.to_coco_annotations()
# convert predictions to coco predictions
result.to_coco_predictions(image_id=1)
# convert predictions to [imantics](https://github.com/jsbroks/imantics) annotation format
result.to_imantics_annotations()
# convert predictions to [fiftyone](https://github.com/voxel51/fiftyone) detection format
result.to_fiftyone_detections()
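For example, the converted COCO-style predictions can be written out as a results file; this assumes `to_coco_predictions` returns a list of JSON-serializable dicts, and the file name is illustrative:

```python
import json

# dump COCO-style predictions to a results file
# (assumes to_coco_predictions returns plain dicts; adapt if it returns prediction objects)
coco_predictions = result.to_coco_predictions(image_id=1)
with open("results.json", "w") as f:
    json.dump(coco_predictions, f)
```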
- Check more usage examples in the Colab notebooks.
v0.4.8
FiftyOne Utilities
Explore COCO dataset via FiftyOne app:
from sahi.utils.fiftyone import launch_fiftyone_app
# launch fiftyone app:
session = launch_fiftyone_app(coco_image_dir, coco_json_path)
# close fiftyone app:
session.close()
Convert predictions to FiftyOne detection:
from sahi import get_sliced_prediction
# perform sliced prediction
result = get_sliced_prediction(
image,
detection_model,
slice_height = 256,
slice_width = 256,
overlap_height_ratio = 0.2,
overlap_width_ratio = 0.2
)
# convert first object into fiftyone detection format
object_prediction = result["object_prediction_list"][0]
fiftyone_detection = object_prediction.to_fiftyone_detection(image_height=720, image_width=1280)
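To convert every prediction rather than only the first one, the same call can be applied over the whole list; the image size values are placeholders:

```python
# convert all object predictions into fiftyone detection format
fiftyone_detections = [
    object_prediction.to_fiftyone_detection(image_height=720, image_width=1280)
    for object_prediction in result["object_prediction_list"]
]
```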
v0.4.6
new feature
- add more mot utils (#133)
MOT Challenge formatted ground truth dataset creation:
- import required classes:
from sahi.utils.mot import MotAnnotation, MotFrame, MotVideo
- init video:
mot_video = MotVideo(name="sequence_name")
- init first frame:
mot_frame = MotFrame()
- add annotations to frame:
mot_frame.add_annotation(
MotAnnotation(bbox=[x_min, y_min, width, height])
)
mot_frame.add_annotation(
MotAnnotation(bbox=[x_min, y_min, width, height])
)
- add frame to video:
mot_video.add_frame(mot_frame)
- export in MOT challenge format:
mot_video.export(export_dir="mot_gt", type="gt")
- your MOT challenge formatted ground truth files are ready under the `mot_gt/sequence_name/` folder.
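Putting these steps together, a minimal end-to-end sketch; the bounding box values are placeholders:

```python
from sahi.utils.mot import MotAnnotation, MotFrame, MotVideo

# init video and a single frame
mot_video = MotVideo(name="sequence_name")
mot_frame = MotFrame()

# add placeholder annotations as [x_min, y_min, width, height]
mot_frame.add_annotation(MotAnnotation(bbox=[10, 20, 55, 105]))
mot_frame.add_annotation(MotAnnotation(bbox=[60, 80, 40, 90]))

# add frame to video and export in MOT challenge format
mot_video.add_frame(mot_frame)
mot_video.export(export_dir="mot_gt", type="gt")
```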
Advanced MOT Challenge formatted ground truth dataset creation:
- you can customize the tracker while initializing the mot video object:
tracker_params = {
'max_distance_between_points': 30,
'min_detection_threshold': 0,
'hit_inertia_min': 10,
'hit_inertia_max': 12,
'point_transience': 4,
}
# for details: https://github.com/tryolabs/norfair/tree/master/docs#arguments
mot_video = MotVideo(tracker_kwargs=tracker_params)
- you can omit automatic track id generation and directly provide the track ids of the annotations:
# create annotations with track ids:
mot_frame.add_annotation(
MotAnnotation(bbox=[x_min, y_min, width, height], track_id=1)
)
mot_frame.add_annotation(
MotAnnotation(bbox=[x_min, y_min, width, height], track_id=2)
)
# add frame to video:
mot_video.add_frame(mot_frame)
# export in MOT challenge format without automatic track id generation:
mot_video.export(export_dir="mot_gt", type="gt", use_tracker=False)
- you can overwrite the results into an already present directory by adding `exist_ok=True`:
mot_video.export(export_dir="mot_gt", type="gt", exist_ok=True)
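As a combined sketch of the options above, providing explicit track ids and overwriting an existing export directory; the bounding box values are placeholders:

```python
from sahi.utils.mot import MotAnnotation, MotFrame, MotVideo

mot_video = MotVideo(name="sequence_name")

# provide track ids directly instead of generating them with the tracker
mot_frame = MotFrame()
mot_frame.add_annotation(MotAnnotation(bbox=[10, 20, 55, 105], track_id=1))
mot_frame.add_annotation(MotAnnotation(bbox=[60, 80, 40, 90], track_id=2))
mot_video.add_frame(mot_frame)

# export without automatic track id generation, overwriting any previous export
mot_video.export(export_dir="mot_gt", type="gt", use_tracker=False, exist_ok=True)
```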
MOT Challenge formatted tracker output creation:
- import required classes:
from sahi.utils.mot import MotAnnotation, MotFrame, MotVideo
- init video by providing video name:
mot_video = MotVideo(name="sequence_name")
- init first frame:
mot_frame = MotFrame()
- add tracker outputs to frame:
mot_frame.add_annotation(
MotAnnotation(bbox=[x_min, y_min, width, height], track_id=1)
)
mot_frame.add_annotation(
MotAnnotation(bbox=[x_min, y_min, width, height], track_id=2)
)
- add frame to video:
mot_video.add_frame(mot_frame)
- export in MOT challenge format:
mot_video.export(export_dir="mot_test", type="test")
- your MOT challenge formatted tracker output file is ready as `mot_test/sequence_name.txt`.
Advanced MOT Challenge formatted tracker output creation:
- you can enable the tracker and directly provide object detector outputs:
# add object detector outputs:
mot_frame.add_annotation(
MotAnnotation(bbox=[x_min, y_min, width, height])
)
mot_frame.add_annotation(
MotAnnotation(bbox=[x_min, y_min, width, height])
)
# add frame to video:
mot_video.add_frame(mot_frame)
# export in MOT challenge format by applying a Kalman-based tracker:
mot_video.export(export_dir="mot_gt", type="gt", use_tracker=True)
- you can customize the tracker while initializing the mot video object:
tracker_params = {
'max_distance_between_points': 30,
'min_detection_threshold': 0,
'hit_inertia_min': 10,
'hit_inertia_max': 12,
'point_transience': 4,
}
# for details: https://github.com/tryolabs/norfair/tree/master/docs#arguments
mot_video = MotVideo(tracker_kwargs=tracker_params)
- you can overwrite the results into an already present directory by adding `exist_ok=True`:
mot_video.export(export_dir="mot_gt", type="gt", exist_ok=True)
documentation
Check the MMDetection + SAHI demo.
bug fixes
- fix demo notebooks (#136)
v0.4.5
v0.4.4
v0.4.3
refactorize postprocess (#109)
- specify the postprocess type to be applied over sliced predictions as `--postprocess_type UNIONMERGE` or `--postprocess_type NMS`
- specify the postprocess match metric as `--match_metric IOS` for intersection over smaller area or `--match_metric IOU` for intersection over union
- specify the postprocess match threshold as `--match_thresh 0.5`
- add the `--class_agnostic` argument to ignore category ids of the predictions during postprocess (merging/nms)
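For instance, these flags might be combined in a single prediction run like the following; the `scripts/predict.py` path and the `--source` argument are assumptions and may not match the script's actual interface:

`python scripts/predict.py --source path/to/images --postprocess_type NMS --match_metric IOS --match_thresh 0.5 --class_agnostic`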
export visuals with gt (#107)
- export visuals with predicted + gt annotations into the `visuals_with_gt` folder when `coco_file_path` is provided
- keep the source folder structure when exporting results
- add `from_coco_annotation_dict` classmethod to `ObjectAnnotation`
- remove unused imports/classes/parameters
- better typing hints