We learn a joint embedding between cluttered, incomplete 3D scanned objects and CAD models, in which semantically similar objects from both domains lie close together despite low-level geometric differences. Such an embedding space enables, for instance, efficient nearest-neighbor cross-domain retrieval.
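Once both domains are mapped into a shared space, retrieval reduces to a nearest-neighbor search over CAD embeddings. A minimal sketch (illustrative names and toy embeddings, not the project's actual code):

```python
import numpy as np

def retrieve(scan_embedding, cad_embeddings, k=3):
    """Return indices of the k CAD embeddings closest to the scan embedding."""
    dists = np.linalg.norm(cad_embeddings - scan_embedding, axis=1)
    return np.argsort(dists)[:k]

# Toy example: 4 CAD models embedded in a 3-dimensional space.
cads = np.array([[0.0, 0.0, 1.0],
                 [1.0, 0.0, 0.0],
                 [0.9, 0.1, 0.0],
                 [0.0, 1.0, 0.0]])
query = np.array([1.0, 0.0, 0.1])
print(retrieve(query, cads, k=2))  # indices of the 2 closest CAD models: [1 2]
```

In practice the embeddings would come from the trained networks and the search could be accelerated with an approximate nearest-neighbor index.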
For the task of CAD model retrieval, other approaches often evaluate performance at the class-category level, i.e., comparing the class category of the query to those of the results. However, we wish to evaluate retrieval performance at a more fine-grained level than class categories. To this end, we propose our new Scan-CAD Object Similarity dataset.
For more information, please have a look at our ICCV 2019 paper.
We present the Scan-CAD Object Similarity dataset, consisting of 5102 ranked scan-CAD similarity annotations.
To construct this dataset we use ScanNet, Scan2CAD and ShapeNetCore.v2. We extract the regions of scanned objects from ScanNet scenes using their semantic segmentations and bring each object into a canonical pose using the correspondences from Scan2CAD.
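Bringing a scanned object into a canonical pose amounts to applying a rigid alignment transform to its points. A hedged sketch, assuming the alignment is given as a 4x4 homogeneous matrix (as in Scan2CAD annotations); the matrix values here are illustrative:

```python
import numpy as np

def canonicalize(points, scan_to_canonical):
    """Apply a 4x4 homogeneous transform to an Nx3 array of points."""
    homogeneous = np.hstack([points, np.ones((points.shape[0], 1))])
    return (homogeneous @ scan_to_canonical.T)[:, :3]

# Toy example: a pure translation by (1, 0, 0).
T = np.eye(4)
T[0, 3] = 1.0
pts = np.array([[0.0, 0.0, 0.0],
                [1.0, 2.0, 3.0]])
print(canonicalize(pts, T))  # points shifted by +1 along x
```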
To build a ranked list of similar CAD models, we developed an annotation interface: for each scan query, the annotator sees a pool of 6 proposed CAD models, selects the up to 3 most similar ones from this pool, and ranks them.
If you would like access to the Scan-CAD Object Similarity dataset, please fill out this Google form, and once accepted, we will send out the download link.
This code was developed with Python 3.7 and PyTorch 1.3.
- `src/train.py` contains the main training loop.
- `src/test.py` contains the code for evaluating the metrics, i.e., domain confusion, retrieval accuracy, and ranking quality.
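Of these metrics, retrieval accuracy is the most straightforward: the fraction of queries whose top-ranked retrieved model matches the annotated ground truth. A minimal sketch with hypothetical names (the exact definitions used in `src/test.py` may differ):

```python
def retrieval_accuracy(retrieved, ground_truth):
    """Fraction of queries whose top-ranked result equals the annotated model id.

    retrieved:    list of ranked result-id lists, one per query
    ground_truth: list of annotated model ids, one per query
    """
    hits = sum(1 for ranked, gt in zip(retrieved, ground_truth) if ranked[0] == gt)
    return hits / len(ground_truth)

# Toy example: 3 queries with their ranked retrieval ids vs. annotated ids.
retrieved = [[5, 2], [7, 1], [3, 9]]
gt = [5, 1, 3]
print(retrieval_accuracy(retrieved, gt))  # 2 of 3 top-1 hits -> 0.666...
```

Ranking quality would additionally compare the full annotated ranking against the retrieved order (e.g., with a rank-correlation or gain-based measure).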
You can view our YouTube video here.
If you use the data or code please cite:
@inproceedings{dahnert2019embedding,
title={Joint Embedding of 3D Scan and CAD Objects},
author={Dahnert, Manuel and Dai, Angela and Guibas, Leonidas and Nie{\ss}ner, Matthias},
booktitle={The IEEE International Conference on Computer Vision (ICCV)},
year={2019}
}
If you have any questions, please contact us at [email protected].