This repository contains the code for an autonomous robotic dog, designed to interact with its environment using object detection and distance estimation. The robot utilizes a camera to detect objects (specifically people) and calculate the distance to them, adjusting its movements accordingly.
- Object detection with a pre-trained ONNX model.
- Distance estimation using the camera's focal length and a known object width (see the sketch after this list).
- Autonomous movement control to approach or maintain a specific distance from detected people.
- Turning behavior when no objects are detected, enabling the robot to search its environment.
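The distance estimation mentioned above follows the standard pinhole-camera relation: given the real-world width of the target and the camera's focal length, distance = (real width × focal length) / perceived width in pixels. The sketch below illustrates the idea with made-up calibration numbers (assumed person width, reference distance, and reference pixel width); the actual constants are defined in the controller script.

```python
# Minimal sketch of focal-length/known-width distance estimation.
# All calibration numbers below are illustrative assumptions,
# not values taken from this repository.

KNOWN_PERSON_WIDTH_CM = 40.0      # assumed average shoulder width
CALIBRATION_DISTANCE_CM = 200.0   # distance at which a reference image was taken
CALIBRATION_WIDTH_PX = 140.0      # bounding-box width measured in that image

def focal_length_px() -> float:
    """Derive the focal length once from the reference image:
    F = (pixel width * known distance) / real width."""
    return (CALIBRATION_WIDTH_PX * CALIBRATION_DISTANCE_CM) / KNOWN_PERSON_WIDTH_CM

def estimate_distance_cm(perceived_width_px: float) -> float:
    """Pinhole-camera relation: D = (real width * F) / perceived pixel width."""
    return (KNOWN_PERSON_WIDTH_CM * focal_length_px()) / perceived_width_px

# Example: a person whose bounding box is half as wide as in the
# reference image is roughly twice as far away.
print(round(estimate_distance_cm(70.0)))  # -> 400
```

The perceived width would come from the bounding box returned by the ONNX person detector on each camera frame.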
- OpenCV
- NumPy
- ONNX Runtime
- spidev
- xgoscreen
- xgolib
- Pillow
- threading (Python standard library)
- random (Python standard library)
- Ensure the Xgo robot is connected and the camera is set up correctly.
- Run the controller script to start the robot's autonomous behavior.
- The robot will move towards detected people, maintaining a safe distance.
- If no people are detected, the robot will turn to search for them (a sketch of this control loop follows below).
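The approach/search behavior above boils down to a simple loop: detect, estimate distance, then either walk forward, hold position, or rotate to look around. The sketch below shows that loop with hypothetical placeholder functions; in the actual controller these steps map to the camera/ONNX pipeline and to xgolib movement calls, which are not reproduced here.

```python
# Hedged sketch of the search/approach loop. The detection and movement
# functions are hypothetical stand-ins for the real camera pipeline and
# the xgolib commands used by the controller script.
import random
import time

SAFE_DISTANCE_CM = 80.0   # assumed distance to keep from a detected person

def detect_person_distance_cm() -> float | None:
    """Placeholder: would run detection + distance estimation on one frame."""
    return None

def move_forward() -> None:
    """Placeholder for the robot's 'walk forward' command."""

def turn_in_place(direction: int) -> None:
    """Placeholder for the robot's 'rotate' command (-1 left, +1 right)."""

def stop() -> None:
    """Placeholder for the robot's 'stop/stand' command."""

def control_loop() -> None:
    while True:
        distance = detect_person_distance_cm()
        if distance is None:
            turn_in_place(random.choice((-1, 1)))   # nobody in view: rotate and search
        elif distance > SAFE_DISTANCE_CM:
            move_forward()                           # person is far: approach
        else:
            stop()                                   # close enough: hold position
        time.sleep(0.1)                              # pace the loop to the camera frame rate
```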
- Improve face detection accuracy.
- Optimize face detection performance for real-time processing.
- Refine the turning mechanism to be more responsive to object detection.
- Integrate additional sensors for enhanced environmental awareness.
- Implement machine learning algorithms for better object classification.
- In progress: Enhance the obstacle avoidance system.
This project is licensed under the MIT License - see the LICENSE file for details.
Special thanks to the contributors of the libraries and tools used in this project.
Keyvan Hardani - November 2023