Hi, many thanks for your wonderful work and efforts. I'm trying to adapt some modules from monosdf into my own work, but I'm having trouble producing the appropriate data format for monosdf.
As you explained for preprocess/scannet_to_monosdf.py in #75: the 2 stands for the range [-1, 1], and the +3 is there because each camera is assumed to see about 1.5 meters forward in ScanNet:

scale = 2. / (np.max(max_vertices - min_vertices) + 3.)
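For context, here is how I currently read the surrounding normalization step. This is a minimal sketch reconstructed from my reading of the script, so the variable names (poses, camera_centers) and the random stand-in data are mine:

```python
import numpy as np

# Stand-in data for illustration: (N, 4, 4) camera-to-world matrices.
rng = np.random.default_rng(0)
poses = np.tile(np.eye(4), (10, 1, 1))
poses[:, :3, 3] = rng.uniform(-2., 2., size=(10, 3))

camera_centers = poses[:, :3, 3]             # world-space camera positions
min_vertices = camera_centers.min(axis=0)
max_vertices = camera_centers.max(axis=0)

center = (min_vertices + max_vertices) / 2.  # bounding-box center
# 2. -> target range [-1, 1]; +3. -> margin so geometry up to
# ~1.5 m in front of any camera still lands inside the unit cube
scale = 2. / (np.max(max_vertices - min_vertices) + 3.)

# scale_mat maps normalized (unit-cube) coordinates back to world space
scale_mat = np.eye(4).astype(np.float32)
scale_mat[:3, 3] = -center
scale_mat[:3] *= scale
scale_mat = np.linalg.inv(scale_mat)
```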
Could you explain more about the magic number 3.? Why should we take the depth sensor's detection range into consideration? And why does this extra "3." vary across datasets, e.g. Replica (#69) and DTU (#21)?
@niujinshuchong I really appreciate your efforts in answering our questions!
I encountered the same issue. The results are excellent when training with the dataset's provided trajectories, but when using trajectories generated by COLMAP, the results are poor. I attempted to normalize COLMAP's trajectories along the longest axis to the range [-1, 1] while scaling the other two axes proportionally, but the results are still unsatisfactory. Do you have any recommendations for integrating COLMAP? Additionally, how should a parameter analogous to the "3." be set during this transformation?
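For reference, this is roughly what I tried. It is a minimal sketch, where normalize_poses and the margin parameter are my own names, with margin standing in for the "3." from scannet_to_monosdf.py:

```python
import numpy as np

def normalize_poses(poses, margin=3.0):
    """Rescale (N, 4, 4) camera-to-world poses so all camera centers
    fit inside [-1, 1]^3; `margin` plays the role of the '3.'."""
    centers = poses[:, :3, 3]
    min_v, max_v = centers.min(axis=0), centers.max(axis=0)
    center = (min_v + max_v) / 2.
    # One uniform scale from the longest axis, so the other two axes
    # shrink proportionally rather than being stretched independently.
    scale = 2. / (np.max(max_v - min_v) + margin)

    normalized = poses.copy()
    normalized[:, :3, 3] = (centers - center) * scale
    return normalized, scale
```

My suspicion is that a fixed metric margin makes little sense here, since a COLMAP reconstruction has arbitrary scale; the "3." (meters) would presumably need to be converted into the sparse model's units first.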