A robust and modular Computer Vision pipeline for multi-player detection, tracking, and re-identification in video footage, powered by YOLO object detection, handcrafted feature extraction, and real-time multi-object tracking.
- 🎯 YOLOv8-Based Detection — Leverages state-of-the-art YOLO architecture for high-precision person detection.
- 🔁 Custom Re-identification Engine — Feature-based identity recovery even after occlusion or frame exit.
- 📌 Multi-object Tracking — Maintains persistent identities with unique player IDs across frames.
- 📈 Online Learning Phase — Continuously updates appearance models to adapt to real-time visual variations.
- ♻️ Lost Player Recovery — Recovers previously tracked players using memory and similarity heuristics.
- 🎥 Annotated Video Output — Generates real-time visualizations with bounding boxes, IDs, and stats.
Python >= 3.8
pip install opencv-python numpy ultralytics
- `opencv-python` → Video frame operations
- `numpy` → Feature vector computations
- `ultralytics` → YOLO detection integration
- `collections` → Python-native data handling (standard library, no install required)
- CPU: 4-core minimum
- RAM: 8GB (16GB+ preferred)
- GPU: CUDA-enabled GPU (optional, but recommended for acceleration)
git clone https://github.com/your-repo/Player-Reid-Tracker.git
cd Player-Reid-Tracker
# Create a virtual environment
python -m venv venv

# Activate it (Windows)
venv\Scripts\activate

# Activate it (Mac/Linux)
source venv/bin/activate
pip install -r requirements.txt
# or manually
pip install opencv-python numpy ultralytics
python playersdetc.py
- YOLO detects bounding boxes per frame
- Applies confidence filtering (`conf_thresh=0.4`), as in the sketch below
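A minimal sketch of this detection step, assuming the standard Ultralytics inference API and the bundled `best.pt` weights (the loop structure and variable names are illustrative, not the project's actual code):

```python
import cv2
from ultralytics import YOLO

model = YOLO("best.pt")                      # project-supplied YOLOv8 weights
cap = cv2.VideoCapture("15sec_input_720p.mp4")

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    # Run detection on the frame; conf=0.4 mirrors self.conf_thresh
    result = model(frame, conf=0.4, verbose=False)[0]
    for box, conf, cls in zip(result.boxes.xyxy, result.boxes.conf, result.boxes.cls):
        if int(cls) != 0:                    # keep only the "person" class
            continue
        x1, y1, x2, y2 = map(int, box)
        # (x1, y1, x2, y2) and float(conf) are handed to the tracker
cap.release()
```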
- Color Histogram (RGB analysis)
- Gradient Descriptors (Sobel filtering)
- L2 Normalization (for cosine distance consistency)
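The descriptor details live inside the tracker class; the sketch below shows one plausible way to combine a color histogram, Sobel gradients, and L2 normalization with OpenCV and NumPy (the exact packing into the 512-dimensional `self.feature_dim` vector is not reproduced here):

```python
import cv2
import numpy as np

def extract_features(frame, bbox):
    """Color histogram + Sobel gradient descriptor, L2-normalized."""
    x1, y1, x2, y2 = bbox
    crop = frame[y1:y2, x1:x2]

    # Color histogram over the three channels (8 bins each = 512 values)
    hist = cv2.calcHist([crop], [0, 1, 2], None, [8, 8, 8],
                        [0, 256, 0, 256, 0, 256]).flatten()

    # Gradient-magnitude histogram from Sobel filtering
    gray = cv2.cvtColor(crop, cv2.COLOR_BGR2GRAY)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    mag = cv2.magnitude(gx, gy)
    grad_hist, _ = np.histogram(mag, bins=64, range=(0, float(mag.max()) + 1e-6))

    feat = np.concatenate([hist, grad_hist]).astype(np.float32)
    # L2 normalization keeps cosine distances well-behaved
    return feat / (np.linalg.norm(feat) + 1e-8)
```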
- Initial tracking via appearance & motion similarity
- Lost tracks are matched against incoming detections using:
  - Appearance weight: 0.7
  - Motion weight: 0.3
- Tracks are terminated after 30 missed frames, or re-identified within 300 frames of being lost (see the lifecycle sketch below)
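A hedged sketch of how the `max_age` and `reid_memory_frames` windows might be enforced on every frame (the dictionary-based track fields are assumptions, not the project's actual data model):

```python
def update_track_lifecycles(active_tracks, lost_tracks, frame_idx,
                            max_age=30, reid_memory_frames=300):
    """Move stale active tracks to the lost buffer; drop expired lost tracks."""
    for tid, track in list(active_tracks.items()):
        if frame_idx - track["last_seen"] > max_age:
            lost_tracks[tid] = active_tracks.pop(tid)   # still eligible for re-ID
    for tid, track in list(lost_tracks.items()):
        if frame_idx - track["last_seen"] > reid_memory_frames:
            del lost_tracks[tid]                        # identity permanently retired
```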
Tweak the behavior of the pipeline inside the `PlayerReidentificationTracker` class:
# Detection
self.conf_thresh = 0.4
self.max_distance = 150
self.max_age = 30
# Re-identification
self.reid_memory_frames = 300
self.learning_frames = 90
self.reid_threshold = 0.6
# Features
self.feature_dim = 512
self.appearance_weight = 0.7
self.motion_weight = 0.3
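Because these are plain instance attributes, they can also be overridden after construction instead of editing the source. The import below assumes the class is defined in `playersdetc.py` (per the run command above) and that the constructor takes no required arguments:

```python
from playersdetc import PlayerReidentificationTracker  # module name assumed from the run command

tracker = PlayerReidentificationTracker()
tracker.conf_thresh = 0.5      # stricter detection filtering
tracker.reid_threshold = 0.55  # accept slightly looser re-ID matches
```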
| Type | File | Description |
|---|---|---|
| Input | `15sec_input_720p.mp4` | Raw video footage (720p recommended) |
| Input | `best.pt` | YOLOv8 model weights |
| Output | `output_reidentification.mp4` | Final video with tracking overlays |
- 🎯 Bounding Boxes with distinct player-specific colors
- 🔢 Player IDs (P1, P2, …)
- 📊 Confidence Scores
- 📡 Learning Phase Indicator
- 🧾 Live Stats: frame number, active/lost count, ETA
- ✅ Processed Frames Count
- 🔁 FPS (Frames Per Second)
- 👥 Total Players Detected
- ⏱️ Estimated Time Remaining
- 🔍 Lost vs Active Tracks
| Issue | Resolution |
|---|---|
| Could not open video | Ensure `15sec_input_720p.mp4` exists in the root directory |
| `best.pt` not loading | Verify the model weights are in valid YOLOv8 format |
| Slow or laggy performance | Lower the video resolution / use GPU acceleration |
| venv activation fails | Use the platform-specific activate commands (see Setup) |
appearance_similarity = cosine(feature_vector_a, feature_vector_b)
motion_similarity = inverse_distance(bbox_center_a, bbox_center_b)
- Final matching score is a weighted average of the above.
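A self-contained sketch of that scoring scheme (function names are illustrative and do not mirror the project's internal API; `max_distance` reuses the configuration value above):

```python
import numpy as np

def cosine_similarity(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def motion_similarity(center_a, center_b, max_distance=150):
    # Map pixel distance to [0, 1]: identical centers -> 1, max_distance apart -> 0
    dist = np.linalg.norm(np.asarray(center_a, float) - np.asarray(center_b, float))
    return max(0.0, 1.0 - dist / max_distance)

def matching_score(feat_a, feat_b, center_a, center_b,
                   appearance_weight=0.7, motion_weight=0.3):
    return (appearance_weight * cosine_similarity(feat_a, feat_b)
            + motion_weight * motion_similarity(center_a, center_b))
```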
- Active Tracks: On-screen players
- Lost Buffer: Recently lost tracks for re-ID
- Feature History: Cached appearance embeddings per player
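One plausible layout for these structures using only the standard library (field names and the history length are assumptions):

```python
from collections import deque

active_tracks = {}   # player_id -> {"bbox": ..., "last_seen": frame_idx, "features": deque}
lost_tracks = {}     # player_id -> same layout, kept for up to reid_memory_frames frames

def remember_features(track, feature_vector, history_len=10):
    """Cache the most recent appearance embeddings for a player."""
    track.setdefault("features", deque(maxlen=history_len)).append(feature_vector)
```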
- ✅ GPU-optimized re-ID model integration (e.g., Deep SORT + MobileNet)
- ✅ Configurable YAML/JSON for parameters
- ✅ UI Dashboard for real-time monitoring
- ✅ Dataset evaluation support for benchmarked results
- YOLO by Ultralytics
- OpenCV Python Docs
- Research on Person Re-ID and Multi-Object Tracking
- The default parameters are tuned for short, high-resolution sports or surveillance clips.
- Re-identification is robust up to 300 frames of absence.
- The feature space is normalized and modular, ensuring scalability across different datasets or player uniforms.