
# ScanNet Instructions

I personally found it a bit tricky to set up the ScanNet dataset the first time I tried it, so I am compiling some notes and instructions here in case someone else finds them useful.

## 1. Dataset download

To download ScanNet data and its labels, follow the instructions here. Basically, fill out the ScanNet Terms of Use agreement and email it to [email protected]. You will receive a download link to the dataset. Download the dataset and unzip it.

## 2. Use SensReader to extract RGB-D and camera data

Use the `reader.py` script as follows for each scene you want to work with:

```
python reader.py --filename [.sens file to export data from] --output_path [output directory to export data to]

Options:
--export_depth_images: export all depth frames as 16-bit pngs (depth shift 1000)
--export_color_images: export all color frames as 8-bit rgb jpgs
--export_poses: export all camera poses (4x4 matrix, camera to world)
--export_intrinsics: export camera intrinsics (4x4 matrix)
```
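
If you have many scenes, a small wrapper can call `reader.py` for each `.sens` file. Below is a minimal sketch; the directory layout (`scans/<scene_id>/<scene_id>.sens`) and output paths are assumptions about a typical ScanNet download, not something SensReader requires, so adjust them to your setup.

```python
# Minimal sketch: batch-extract every scene with SensReader's reader.py.
# The directory layout below is an assumption about a typical ScanNet download.
import subprocess
from pathlib import Path

SCANS_DIR = Path("scannet/scans")       # assumed: one sub-folder per scene containing a .sens file
OUTPUT_DIR = Path("scannet/extracted")  # where RGB, depth, poses, and intrinsics will be written

for sens_file in sorted(SCANS_DIR.glob("*/*.sens")):
    scene_out = OUTPUT_DIR / sens_file.parent.name
    scene_out.mkdir(parents=True, exist_ok=True)
    subprocess.run(
        [
            "python", "reader.py",
            "--filename", str(sens_file),
            "--output_path", str(scene_out),
            "--export_depth_images",
            "--export_color_images",
            "--export_poses",
            "--export_intrinsics",
        ],
        check=True,  # stop early if extraction of any scene fails
    )
```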

## 3. Convert the data to NeRF-style format

Then, use this script to convert the extracted data to a NeRF-style format. For instructions, see Step 1 here.

1. The transformation matrices (`c2w`) in the generated `transforms_xxx.json` are in the SLAM/OpenCV camera convention (xyz -> right, down, forward). You need to convert them to the OpenGL/NeRF convention (xyz -> right, up, back) in your dataloader to train with the standard NeRF setup.
2. For example, see the conversion done here; a minimal sketch also follows this list.
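
As a reference, here is a minimal sketch of that conversion, assuming `c2w` is the 4x4 camera-to-world matrix loaded from `transforms_xxx.json`; the function name and NumPy usage are illustrative, not taken from any particular codebase.

```python
# Minimal sketch of the OpenCV -> OpenGL/NeRF pose conversion.
# Assumes c2w is a 4x4 camera-to-world matrix as stored in transforms_xxx.json.
import numpy as np

def opencv_to_nerf(c2w: np.ndarray) -> np.ndarray:
    """Convert a camera-to-world pose from the OpenCV convention
    (x right, y down, z forward) to the NeRF/OpenGL convention
    (x right, y up, z back)."""
    c2w = c2w.copy()
    # Flipping the camera's y and z axes negates the 2nd and 3rd columns of the
    # rotation block; the translation (camera position) is left unchanged.
    c2w[:3, 1] *= -1
    c2w[:3, 2] *= -1
    return c2w
```

Apply this to each pose as you load frames in the dataloader, before building rays.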