Ready-to-use models for a range of computer vision tasks like detection, classification, and more. With ONNX support, you get fast and accurate results right out of the box.
Easily integrate these models into your apps for real-time processing—ideal for edge devices, cloud setups, or production environments. In one line of code, you can have powerful model inference running!
```python
from open_image_models import LicensePlateDetector

lp_detector = LicensePlateDetector(detection_model="yolo-v9-t-256-license-plate-end2end")
lp_detector.predict("path/to/license_plate_image.jpg")
```
✨ That's it! Powerful license plate detection with just a few lines of code.
- 🚀 Pre-trained: Models are ready for immediate use, no additional training required.
- 🌟 ONNX: Cross-platform support for fast inference on both CPU and GPU environments (see the provider check right after this list).
- ⚡ Performance: Optimized for both speed and accuracy, ensuring efficient real-time applications.
- 💻 Simple API: Power up your applications with robust model inference in just one line of code.
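Since inference runs through ONNX Runtime, you can quickly verify which execution providers are available on your machine before expecting GPU acceleration. A minimal check, assuming the `onnxruntime` package is installed alongside this library:

```python
import onnxruntime as ort

# Execution providers ONNX Runtime can use on this machine,
# e.g. ['CUDAExecutionProvider', 'CPUExecutionProvider'] when a GPU build is present
print(ort.get_available_providers())

# 'GPU' or 'CPU', depending on the installed onnxruntime build
print(ort.get_device())
```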
| Model | Image Size | Precision (P) | Recall (R) | mAP50 | mAP50-95 |
|---|---|---|---|---|---|
| yolo-v9-s-608-license-plate-end2end | 608 | 0.957 | 0.917 | 0.966 | 0.772 |
| yolo-v9-t-640-license-plate-end2end | 640 | 0.966 | 0.896 | 0.958 | 0.758 |
| yolo-v9-t-512-license-plate-end2end | 512 | 0.955 | 0.901 | 0.948 | 0.724 |
| yolo-v9-t-416-license-plate-end2end | 416 | 0.940 | 0.894 | 0.940 | 0.702 |
| yolo-v9-t-384-license-plate-end2end | 384 | 0.942 | 0.863 | 0.920 | 0.687 |
| yolo-v9-t-256-license-plate-end2end | 256 | 0.937 | 0.797 | 0.858 | 0.606 |
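Picking a model is just a matter of passing its identifier from the table above to the detector; larger input sizes generally buy accuracy at the cost of speed. For example:

```python
from open_image_models import LicensePlateDetector

# Most accurate variant in the table (608 input size)
accurate_detector = LicensePlateDetector(detection_model="yolo-v9-s-608-license-plate-end2end")

# Smallest and fastest variant (256 input size), better suited to edge devices
fast_detector = LicensePlateDetector(detection_model="yolo-v9-t-256-license-plate-end2end")
```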
Usage
```python
import cv2
from rich import print

from open_image_models import LicensePlateDetector

# Initialize the License Plate Detector with the pre-trained YOLOv9 model
lp_detector = LicensePlateDetector(detection_model="yolo-v9-t-384-license-plate-end2end")

# Load an image
image_path = "path/to/license_plate_image.jpg"
image = cv2.imread(image_path)

# Perform license plate detection
detections = lp_detector.predict(image)
print(detections)

# Benchmark the model performance
lp_detector.show_benchmark(num_runs=1000)

# Display predictions on the image
annotated_image = lp_detector.display_predictions(image)

# Show the annotated image
cv2.imshow("Annotated Image", annotated_image)
cv2.waitKey(0)
cv2.destroyAllWindows()
```
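If you need the raw results instead of the annotated image, the detections returned by `predict` can be iterated directly. A minimal sketch continuing the example above, assuming each detection exposes a `confidence` score and a `bounding_box` with `x1`/`y1`/`x2`/`y2` pixel coordinates (verify the exact field names against the objects `predict` returns):

```python
# Crop each detected plate out of the original image
# (attribute names here are assumptions, check the returned objects)
for i, detection in enumerate(detections):
    box = detection.bounding_box
    plate_crop = image[box.y1:box.y2, box.x1:box.x2]
    cv2.imwrite(f"plate_{i}_{detection.confidence:.2f}.png", plate_crop)
```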
Tip
Check out the docs!
To install open-image-models via pip, use the following command:
```shell
pip install open-image-models
```
Contributions to the repo are greatly appreciated. Whether it's bug fixes, feature enhancements, or new models, they are all warmly welcomed.
To start contributing or to begin development, you can follow these steps:
- Clone the repository:

  ```shell
  git clone https://github.com/ankandrew/open-image-models.git
  ```

- Install all dependencies using Poetry:

  ```shell
  poetry install --all-extras
  ```

- To ensure your changes pass linting and tests before submitting a PR, run:

  ```shell
  make checks
  ```
```bibtex
@article{wang2024yolov9,
      title={{YOLOv9}: Learning What You Want to Learn Using Programmable Gradient Information},
      author={Wang, Chien-Yao and Liao, Hong-Yuan Mark},
      journal={arXiv preprint arXiv:2402.13616},
      year={2024}
}
```