Updated March 11th, 2020
In the sparse-to-dense depth completion problem, one wants to infer the dense depth map of a 3-D scene given an RGB image and its corresponding sparse reconstruction in the form of a sparse depth map, obtained either from computational methods such as SfM (Structure-from-Motion) or from active sensors such as lidar or structured-light sensors.
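As a concrete illustration of the input and output formats, the sketch below fills a sparse depth map with simple nearest-neighbor interpolation. This is only a naive, non-learned baseline for illustration; the array shapes, the sample count, and the use of scipy.interpolate.griddata are assumptions made for this example, not part of any method listed below.

```python
import numpy as np
from scipy.interpolate import griddata

def naive_densify(sparse_depth):
    """Fill a sparse depth map (H x W, zeros = missing) by nearest-neighbor
    interpolation; a trivial baseline for illustration, not a benchmarked method."""
    valid = sparse_depth > 0                       # pixels that carry a depth sample
    rows, cols = np.nonzero(valid)
    points = np.stack([rows, cols], axis=1)        # (N, 2) sample coordinates
    values = sparse_depth[valid]                   # (N,) depth values
    grid_r, grid_c = np.mgrid[0:sparse_depth.shape[0], 0:sparse_depth.shape[1]]
    dense = griddata(points, values, (grid_r, grid_c), method='nearest')
    return dense.astype(np.float32)

# Example: a 480 x 640 frame with ~1500 sparse depth samples (VOID-like density)
sparse = np.zeros((480, 640), dtype=np.float32)
idx = np.random.choice(480 * 640, size=1500, replace=False)
sparse.flat[idx] = np.random.uniform(0.5, 5.0, size=1500)
dense = naive_densify(sparse)
```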
Example 1: an input RGB image from the VOID dataset and its densified depth map, colored and back-projected to 3-D.
Example 2: an input RGB image from the KITTI dataset and its densified depth map, colored and back-projected to 3-D.
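The 3-D views referenced in the examples above are obtained by back-projecting each densified depth value through the camera intrinsics. Below is a minimal sketch of that standard pinhole back-projection; the intrinsics matrix K shown is a placeholder, not the calibration of VOID or KITTI.

```python
import numpy as np

def backproject_to_3d(depth, K):
    """Back-project a dense depth map (H x W, meters) to an (H*W, 3)
    point cloud using pinhole intrinsics K (3 x 3)."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))        # pixel coordinates
    pixels = np.stack([u, v, np.ones_like(u)], axis=0)    # (3, H, W) homogeneous pixels
    rays = np.linalg.inv(K) @ pixels.reshape(3, -1)       # (3, H*W) camera rays
    points = rays * depth.reshape(1, -1)                  # scale each ray by its depth
    return points.T                                       # (H*W, 3) XYZ in camera frame

# Placeholder intrinsics (fx, fy, cx, cy); illustrative values only
K = np.array([[525.0,   0.0, 320.0],
              [  0.0, 525.0, 240.0],
              [  0.0,   0.0,   1.0]])
```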
Here we compile both unsupervised/self-supervised (monocular and stereo) and supervised methods published at recent conferences and in journals, evaluated on the VOID (Wong et al., 2020) and KITTI (Uhrig et al., 2017) depth completion benchmarks. Our ranking considers all four metrics rather than RMSE alone.
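For reference, the four metrics reported in the tables below are computed only over pixels that have valid ground truth. The sketch below follows the standard definitions (MAE/RMSE on depth, iMAE/iRMSE on inverse depth); the function name and the assumption that depth maps are given in meters with zeros marking missing ground truth are choices made for this example.

```python
import numpy as np

def depth_completion_metrics(pred, gt):
    """MAE/RMSE in mm and iMAE/iRMSE in 1/km between predicted and
    ground-truth depth maps given in meters; pixels with no ground
    truth (gt == 0) are excluded from the evaluation."""
    valid = gt > 0
    p, g = pred[valid], gt[valid]
    err_mm = 1000.0 * (p - g)                      # depth error in millimeters
    inv_err_km = 1000.0 * (1.0 / p - 1.0 / g)      # inverse-depth error in 1/km
    return {
        'MAE':   np.mean(np.abs(err_mm)),
        'RMSE':  np.sqrt(np.mean(err_mm ** 2)),
        'iMAE':  np.mean(np.abs(inv_err_km)),
        'iRMSE': np.sqrt(np.mean(inv_err_km ** 2)),
    }
```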
VOID depth completion benchmark (unsupervised)
Paper | Publication | Code | MAE (mm) | RMSE (mm) | iMAE (1/km) | iRMSE (1/km) |
---|---|---|---|---|---|---|
Unsupervised Depth Completion from Visual Inertial Odometry | RA-L & ICRA 2020 | Tensorflow | 85.05 | 169.79 | 48.92 | 104.02 |
KITTI depth completion benchmark (unsupervised)
Paper | Publication | Code | MAE (mm) | RMSE (mm) | iMAE (1/km) | iRMSE (1/km) |
---|---|---|---|---|---|---|
Unsupervised Depth Completion from Visual Inertial Odometry | RA-L & ICRA 2020 | Tensorflow | 299.41 | 1169.97 | 1.20 | 3.56 |
Dense depth posterior (ddp) from single image and sparse range | CVPR 2019 | Tensorflow | 343.46 | 1263.19 | 1.32 | 3.58 |
DFuseNet: Deep Fusion of RGB and Sparse Depth Information for Image Guided Dense Depth Completion | ITSC 2019 | PyTorch | 429.93 | 1206.66 | 1.79 | 3.62 |
In Defense of Classical Image Processing: Fast Depth Completion on the CPU | CRV 2018 | Python | 302.60 | 1288.46 | 1.29 | 3.78 |
Self-supervised Sparse-to-Dense: Self-supervised Depth Completion from LiDAR and Monocular Camera | ICRA 2019 | PyTorch | 350.32 | 1299.85 | 1.57 | 4.07 |
Semantically Guided Depth Upsampling | GCPR 2016 | N/A | 605.47 | 2312.57 | 2.05 | 7.38 |
KITTI depth completion benchmark (supervised)
Paper | Publication | Code | MAE (mm) | RMSE (mm) | iMAE (1/km) | iRMSE (1/km) |
---|---|---|---|---|---|---|
CSPN++: Learning Context and Resource Aware Convolutional Spatial Propagation Networks for Depth Completion | AAAI 2020 | N/A | 209.28 | 743.69 | 0.90 | 2.07 |
Dense depth posterior (ddp) from single image and sparse range | CVPR 2019 | Tensorflow | 203.96 | 832.94 | 0.85 | 2.10 |
Sparse and noisy LiDAR completion with RGB guidance and uncertainty | MVA 2019 | PyTorch | 215.02 | 772.87 | 0.93 | 2.19 |
A Multi-Scale Guided Cascade Hourglass Network for Depth Completion | WACV 2020 | N/A | 220.41 | 762.19 | 0.98 | 2.30 |
Learning Joint 2D-3D Representations for Depth Completion | ICCV 2019 | N/A | 221.19 | 752.88 | 1.14 | 2.34 |
DeepLiDAR: Deep Surface Normal Guided Depth Prediction for Outdoor Scene From Sparse LiDAR Data and Single Color Image | CVPR 2019 | PyTorch | 226.50 | 758.38 | 1.15 | 2.56 |
Depth Completion from Sparse LiDAR Data with Depth-Normal Constraints | ICCV 2019 | N/A | 235.17 | 777.05 | 1.13 | 2.42 |
Confidence propagation through CNNs for guided sparse depth regression | PAMI 2019 | PyTorch | 233.26 | 829.98 | 1.03 | 2.60 |
Self-supervised Sparse-to-Dense: Self-supervised Depth Completion from LiDAR and Monocular Camera | ICRA 2019 | PyTorch | 249.95 | 814.73 | 1.21 | 2.80 |
Sparse and Dense Data with CNNs: Depth Completion and Semantic Segmentation | 3DV 2018 | N/A | 234.81 | 917.64 | 0.95 | 2.17 |
Depth coefficients for depth completion | CVPR 2019 | N/A | 252.21 | 988.38 | 1.13 | 2.87 |
Depth estimation via affinity learned with convolutional spatial propagation network | ECCV 2018 | N/A | 279.46 | 1019.64 | 1.15 | 2.93 |
Learning morphological operators for depth completion | ACIVS 2019 | N/A | 310.49 | 1045.45 | 1.57 | 3.84 |
Sparsity Invariant CNNs | 3DV 2017 | Tensorflow | 416.14 | 1419.75 | 1.29 | 3.25 |
Deep Convolutional Compressed Sensing for LiDAR Depth Completion | ACCV 2018 | Tensorflow | 439.48 | 1325.37 | 3.19 | 59.39 |