viplix3/PointPillars-TF
Codebase Information

The base code has been taken from the tyagi-iiitv/PointPillars GitHub repository. Because I found some bugs in the original repository and it was no longer actively maintained, I decided to create an alternate repository.

Major changes in this repo compared to the original one:

  • Correct transformation of the KITTI GT from the camera coordinate frame to the LiDAR coordinate frame.
  • Minor changes during target creation.
  • Slight changes in the model convolution setup (conv-bn-relu instead of the original conv(with_bias)-relu-bn).
  • Complete overhaul of the inference pipeline, with support for dumping 3D bounding boxes projected onto the image and exporting labels in the format expected by the KITTI evaluation toolkit.
  • Unit tests for checking code functionality.

Please note that I have not been able to achieve the same performance as claimed in the paper.

About Point Pillars

Point Pillars is a well-known deep neural network for 3D object detection on LiDAR point clouds. Aimed at object detection on the LiDAR devices fitted in self-driving cars, Point Pillars focuses on fast inference (~50 fps), which was an order of magnitude faster than other 3D object detection networks at the time. In this repo, we are developing Point Pillars in TensorFlow. Here's a good first post to familiarize yourself with Point Pillars.

Contributors are welcome to work on open issues and submit PRs. First time contributors are welcome and can pick up any "Good First Issues" to work on.

PointPillars in TensorFlow

Point Pillars 3D detection network implementation in TensorFlow. External contributions are welcome; please fork this repo and see the issues for possible improvements to the code.

Installation

Download the LiDAR, Calibration, and Label_2 zip files from the KITTI dataset link and unzip them, giving the following directory structure:

├── training    <-- 7481 training samples
│   ├── calib
│   ├── label_2
│   └── velodyne
└── testing     <-- 7518 test samples
    ├── calib
    └── velodyne

After placing the KITTI dataset in the root directory, run the following commands:

git clone --recurse-submodules https://github.com/viplix3/PointPillars-TF.git
conda env create -f PointPillarsDevEnv.yml
conda activate PointPillarsDevEnv
python setup.py install
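Before training, it can help to confirm that the dataset was unzipped into the layout shown above. The helper below is a hypothetical sketch (it is not part of this repo) that reports any expected KITTI sub-directory that is missing:

```python
import os

# Expected KITTI layout, mirroring the directory tree above.
REQUIRED_LAYOUT = {
    "training": ("calib", "label_2", "velodyne"),
    "testing": ("calib", "velodyne"),
}

def missing_kitti_dirs(root):
    """Return the list of expected KITTI sub-directories missing under root."""
    missing = []
    for split, subdirs in REQUIRED_LAYOUT.items():
        for sub in subdirs:
            path = os.path.join(root, split, sub)
            if not os.path.isdir(path):
                missing.append(path)
    return missing
```

An empty return value means the layout matches; otherwise the returned paths show exactly what needs to be unzipped or moved.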

Deploy on a cloud notebook instance (Amazon SageMaker etc.)

Please read this blog article: https://link.medium.com/TVNzx03En8

Technical details about this code

Please refer to this article on Medium.

Pretrained Model

The pretrained Point Pillars model for KITTI, with complete training and validation logs, can be accessed with this link. Use the file model.h5.

Saving the model as .pb

Inside the point_pillars_training_run.py file, change the code as follows to save the model in .pb format.

import sys

if __name__ == "__main__":

    params = Parameters()

    pillar_net = build_point_pillar_graph(params)
    pillar_net.load_weights(os.path.join(MODEL_ROOT, "model.h5"))
    pillar_net.save('new_model')  # saves the model in SavedModel (.pb) format in the new_model directory
    sys.exit()
    # Remove these lines to restore usual training.

Loading the saved pb model

import tensorflow as tf

model = tf.saved_model.load('model_directory')
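For completeness, here is a self-contained round trip of the SavedModel save/load cycle described above. The tiny `tf.Module` below is a stand-in for the PointPillars network (not the real model), and the export directory is a hypothetical name:

```python
import tempfile

import numpy as np
import tensorflow as tf

class Scaler(tf.Module):
    """Stand-in for the real network: multiplies its input by a variable."""

    def __init__(self):
        super().__init__()
        self.w = tf.Variable(2.0)

    @tf.function(input_signature=[tf.TensorSpec([None], tf.float32)])
    def __call__(self, x):
        return self.w * x

export_dir = tempfile.mkdtemp()            # hypothetical model directory
tf.saved_model.save(Scaler(), export_dir)  # writes saved_model.pb + variables/

reloaded = tf.saved_model.load(export_dir)
out = reloaded(tf.constant([1.0, 3.0])).numpy()  # restored graph computes w * x
```

The same pattern applies to the real network: after `pillar_net.save('new_model')`, `tf.saved_model.load('new_model')` restores the serving functions along with the trained variables.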

About

PointPillars implementation using TensorFlow.
