# Camera_Lidar_3D_Object_Tracking

This project incorporates:
- Keypoint detectors, descriptors, and methods to match them between successive images
- Object detection in camera images using deep learning (YOLOv3)
- Fusion of camera image data with Lidar points in 3D space to identify 3D objects
- Matching of 3D objects over time using keypoint correspondences
- Computation of time-to-collision (TTC) with objects based on Lidar measurements
- Computation of time-to-collision (TTC) with objects based on camera measurements
- Tests of different detector/descriptor combinations to find the most suitable configuration
The overall project schematic (based on the Udacity SFND) is shown below.
## Dependencies

- cmake >= 2.8
  - All OSes: click here for installation instructions
- make >= 4.1 (Linux, Mac), 3.81 (Windows)
  - Linux: make is installed by default on most Linux distros
  - Mac: install Xcode command line tools to get make
  - Windows: click here for installation instructions
- OpenCV >= 4.1
  - This must be compiled from source using the `-D OPENCV_ENABLE_NONFREE=ON` cmake flag to enable testing of the SIFT and SURF detectors.
  - The OpenCV 4.1.0 source code can be found here
- gcc/g++ >= 5.4
  - Linux: gcc / g++ is installed by default on most Linux distros
  - Mac: same deal as make - install Xcode command line tools
  - Windows: recommend using MinGW
## Build Instructions

- Clone this repo.
- Download the missing `yolov3.weights` file from https://pjreddie.com/media/files/yolov3.weights and copy it to the `dat/yolo/` folder.
- Make a build directory in the top-level project directory: `mkdir build && cd build`
- Compile: `cmake .. && make`
- Run it: `./3D_object_tracking`