I would like to know whether you think the following idea is feasible: add a branch for simple depth prediction after the 2D feature extraction, then use the point cloud generated from that depth as a proposal when projecting the 2D features from the original network onto the 3D voxels. A rough sketch of what I mean by the depth branch is shown below.
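For illustration only, here is a minimal sketch of such an auxiliary depth branch attached to the output of a 2D image backbone. This is not code from this repository; the module, channel sizes, and layer choices are hypothetical and assume a PyTorch-style feature map of shape (B, C, H, W):

```python
import torch
import torch.nn as nn


class DepthHead(nn.Module):
    """Hypothetical auxiliary branch predicting a dense depth map
    from the 2D feature map produced by the image backbone."""

    def __init__(self, in_channels: int = 256):
        super().__init__()
        self.head = nn.Sequential(
            nn.Conv2d(in_channels, 128, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(128, 1, 1),  # one channel: per-pixel depth
            nn.Softplus(),         # keep predicted depth positive
        )

    def forward(self, feats: torch.Tensor) -> torch.Tensor:
        # feats: (B, C, H, W) 2D features -> (B, 1, H, W) depth map
        return self.head(feats)
```

The predicted depth could then be supervised with the sensor depth (which SUN RGB-D provides) and used to decide which voxels along each camera ray should receive the projected 2D features.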
I think this idea is called pseudo-LiDAR, and it is well explored on outdoor datasets such as KITTI. You can also find some works using this keyword on SUN RGB-D, but I don't have much expertise here.
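For context, the core of the pseudo-LiDAR idea is simply unprojecting a predicted depth map into a point cloud with the camera intrinsics. A minimal sketch under the standard pinhole-camera model (the intrinsic parameters fx, fy, cx, cy follow the usual convention; this is not code from this repository):

```python
import numpy as np


def depth_to_point_cloud(depth: np.ndarray, fx: float, fy: float,
                         cx: float, cy: float) -> np.ndarray:
    """Unproject an (H, W) depth map into an (N, 3) point cloud
    in the camera frame, pseudo-LiDAR style."""
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    # Drop pixels with no valid (positive) depth.
    return points[points[:, 2] > 0]
```

The resulting points could serve as the proposal you describe, e.g. to mask out empty voxels before the 2D-to-3D feature projection.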