Commit
This commit does not belong to any branch on this repository, and may belong to a fork outside of the repository.
Showing 10 changed files with 168 additions and 53 deletions.
```diff
@@ -21,4 +21,5 @@ __pycache__
 *.fig
 log
 ae_output
 pytorch/projects/logs
+pytorch/projects/data
```
```diff
@@ -2,8 +2,7 @@
 <!-- ## Introduction <a name="introduction"></a> -->
 
-This repository contains the implementation of *O-CNN* and *Adaptive O-CNN*
-introduced in our SIGGRAPH 2017 paper and SIGGRAPH Asia 2018 paper.
+This repository contains the implementation of our papers related to *O-CNN*.
 The code is released under the **MIT license**.
 
 - **[O-CNN: Octree-based Convolutional Neural Networks](https://wang-ps.github.io/O-CNN.html)**<br/>
```
```diff
@@ -31,8 +30,30 @@ AAAI Conference on Artificial Intelligence (AAAI), 2021. [Arxiv, 2020.08]<br/>
 If you use our code or models, please [cite](docs/citation.md) our paper.
 
+### Contents
+- [Installation](docs/installation.md)
+- [Data Preparation](docs/data_preparation.md)
+- [Shape Classification](docs/classification.md)
+- [Shape Retrieval](docs/retrieval.md)
+- [Shape Segmentation](docs/segmentation.md)
+- [Shape Autoencoder](docs/autoencoder.md)
+- [Shape Completion](docs/completion.md)
+- [Image2Shape](docs/image2shape.md)
+- [Unsupervised Pretraining](docs/unsupervised.md)
+- [ScanNet Segmentation](docs/scannet.md)
+
 ### What's New?
-- 2021.03.01: Update the code for pytorch-based O-CNN, including a ResNet and some important modules.
+- 2021.08.24: Update the code for pytorch-based O-CNN, including a UNet and
+  some other major components. Our vanilla implementation without any tricks on
+  the [ScanNet](docs/scannet.md) dataset achieves 76.2 mIoU on the
+  [ScanNet benchmark](http://kaldir.vc.in.tum.de/scannet_benchmark/), even surpassing
+  recent state-of-the-art approaches published in CVPR 2021 and ICCV 2021.
+- 2021.03.01: Update the code for pytorch-based O-CNN, including a ResNet and
+  some important modules.
 - 2021.02.08: Release the code for ShapeNet segmentation with HRNet.
 - 2021.02.03: Release the code for ModelNet40 classification with HRNet.
 - 2020.10.12: Release the initial version of our O-CNN under PyTorch. The code
```
```diff
@@ -52,26 +73,6 @@ If you use our code or models, please [cite](docs/citation.md) our paper.
 benchmarks.
 
-### Contents
-- [Installation](docs/installation.md)
-- [Data Preparation](docs/data_preparation.md)
-- [Shape Classification](docs/classification.md)
-- [Shape Retrieval](docs/retrieval.md)
-- [Shape Segmentation](docs/segmentation.md)
-- [Shape Autoencoder](docs/autoencoder.md)
-- [Shape Completion](docs/completion.md)
-- [Image2Shape](docs/image2shape.md)
-- [Unsupverised Pretraining](docs/unsupervised.md)
-
 We thank the authors of [ModelNet](http://modelnet.cs.princeton.edu),
 [ShapeNet](http://shapenet.cs.stanford.edu/shrec16/) and
 [Region annotation dataset](http://cs.stanford.edu/~ericyi/project_page/part_annotation/index.html)
 for sharing their 3D model datasets with the public.
 
-Please contact us (Pengshuai Wang [email protected], Yang Liu [email protected])
+Please contact us (Peng-Shuai Wang [email protected], Yang Liu [email protected])
 if you have any problems about our implementation.
```
@@ -0,0 +1,52 @@
# ScanNet Segmentation

### Data preparation

1. Download the data from the [ScanNet benchmark](http://kaldir.vc.in.tum.de/scannet_benchmark/).
   Unzip the data and place it in the folder `<scannet_folder>`.

2. Change the working directory to `pytorch/projects` and run the following
   command to prepare the dataset.
   ```shell
   python tools/scannet.py --run process_scannet --path_in <scannet_folder>
   ```

3. Download the training, validation and testing file lists via the following
   command.
   ```shell
   python tools/scannet.py --run download_filelists
   ```
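As a quick sanity check before training, the downloaded file lists can be inspected with standard shell tools. This is only a hedged sketch: the one-entry-per-line layout and the `scene*.npz` names below are illustrative assumptions, not the exact format produced by `tools/scannet.py`; a toy filelist stands in for the real one.

```shell
# Illustrative sanity check for a downloaded filelist (assumed format:
# one dataset entry per line). A temporary toy filelist is used here.
filelist=$(mktemp)
printf 'scene0000_00.npz\nscene0001_00.npz\nscene0002_00.npz\n' > "$filelist"

total=$(grep -c . "$filelist")              # count non-empty lines
unique=$(sort -u "$filelist" | grep -c .)   # count distinct entries

if [ "$total" -eq "$unique" ]; then
  echo "filelist ok: $total unique entries"
else
  echo "filelist has duplicates: $total entries, $unique unique"
fi
rm -f "$filelist"
```

The same two counts are worth comparing on the real training and validation lists: duplicated scenes silently bias training statistics.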
### Training and testing

1. Run the following command to train the network with 4 GPUs.
   ```shell
   python segmentation.py --config configs/seg_scannet.yaml SOLVER.gpu 0,1,2,3
   ```
   The mIoU on the validation set is 74.0; the training log and weights can be
   downloaded from this [link](https://www.dropbox.com/s/3grwj7vwd802yzz/D9_2cm.zip?dl=0).
2. To achieve the 76.2 mIoU on the testing set, we follow the practice described
   in MinkowskiNet, i.e., training the network on both the training and
   validation sets via the following command.
   ```shell
   python segmentation.py --config configs/seg_scannet.yaml SOLVER.gpu 0,1,2,3 \
       SOLVER.logdir logs/scannet/D9_2cm_all \
       DATA.train.filelist data/scannet/scannetv2_train_val_new.txt
   ```
   The training log and weights can be downloaded from this [link](https://www.dropbox.com/s/szhjus6kmknxyya/D9_2cm_all.zip?dl=0).

3. To generate the per-point predictions, run the following command.
   ```shell
   python segmentation.py --config configs/seg_scannet_eval.yaml
   ```

4. Run the following command to convert the predictions to segmentation labels.
   ```shell
   python tools/scannet.py --path_in data/scannet/test \
       --path_pred logs/scannet/D9_2cm_eval \
       --path_out logs/scannet/D9_2cm_eval_seg \
       --filelist data/scannet/scannetv2_test_new.txt \
       --run generate_output_seg
   ```
   The generated segmentation labels are then available in the folder
   `logs/scannet/D9_2cm_eval_seg`.
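Before uploading to the benchmark, the exported label files can be spot-checked with plain shell tools. A hedged sketch, assuming the ScanNet benchmark's one-integer-label-per-line `*.txt` submission format; the toy file and scene name below are illustrative stand-ins for the real output in `logs/scannet/D9_2cm_eval_seg`.

```shell
# Illustrative check that each exported *.txt file contains only integer
# labels, one per line. A temporary toy file replaces the real output dir.
out_dir=$(mktemp -d)
printf '1\n4\n4\n39\n' > "$out_dir/scene0707_00.txt"

ok_count=0
for f in "$out_dir"/*.txt; do
  if grep -qvE '^[0-9]+$' "$f"; then    # any line that is not a bare integer
    echo "malformed: $(basename "$f")"
  else
    labels=$(grep -c . "$f")            # one label per mesh vertex
    echo "ok: $(basename "$f") ($labels labels)"
    ok_count=$((ok_count + 1))
  fi
done
rm -rf "$out_dir"
```

Run the loop over the real output folder instead of the temporary one; a malformed or short file usually means a scene was missing from the test filelist.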