# InNeRF360

> 'Text-Guided 3D-Consistent Object Inpainting on 360-degree Neural Radiance Fields' (CVPR 2024)
### [Project](https://ivrl.github.io/InNeRF360/) | [arXiv](https://arxiv.org/abs/2305.15094)

Dongqing Wang, Tong Zhang, Alaa Abboud, Sabine Süsstrunk

[![DOI](https://zenodo.org/badge/777345232.svg)](https://zenodo.org/doi/10.5281/zenodo.11094775)
TODO: Release code.
In the meantime, if you have questions, don't hesitate to send a message to [email protected] or open an issue.
![Figure Abstract](/docs/static/images/banner.png)

>**Abstract:** We propose InNeRF360, an automatic system that accurately removes text-specified objects from 360-degree Neural Radiance Fields (NeRF). The challenge is to effectively remove objects while inpainting perceptually consistent content for the missing regions, which is particularly demanding for existing NeRF models due to their implicit volumetric representation. Moreover, unbounded scenes are more prone to floater artifacts in the inpainted region than frontal-facing scenes, as the change of object appearance and background across views is more sensitive to inaccurate segmentations and inconsistent inpainting. With a trained NeRF and a text description, our method efficiently removes specified objects and inpaints visually consistent content without artifacts. We apply depth-space warping to enforce consistency across multiview text-encoded segmentations, and then refine the inpainted NeRF model using perceptual priors and 3D diffusion-based geometric priors to ensure visual plausibility. Through extensive experiments in segmentation and inpainting on 360-degree and frontal-facing NeRFs, we show that our approach is effective and enhances NeRF's editability.
If you find this project useful for your research, please cite:

```
@InProceedings{wang2024innerf360,
    author    = {Wang, Dongqing and Zhang, Tong and Abboud, Alaa and S{\"u}sstrunk, Sabine},
    title     = {{InNeRF360: Text-Guided 3D-Consistent Object Inpainting on 360-degree Neural Radiance Fields}},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
    month     = {June},
    year      = {2024}
}
```

## Installation

1. Set up conda:

```bash
conda create --name innerf360 -y python=3.9
conda activate innerf360
python -m pip install --upgrade pip
```

2. Install [Nerfstudio and dependencies](https://docs.nerf.studio/en/latest/quickstart/installation.html) (a sanity check for this step follows the list):

```bash
cd nerfstudio
pip install -e .
pip install torch==1.13.1 torchvision functorch --extra-index-url https://download.pytorch.org/whl/cu117
pip install git+https://github.com/NVlabs/tiny-cuda-nn/#subdirectory=bindings/torch
```

3. Install binvox:

```bash
mkdir bins
cd bins
wget -O binvox https://www.patrickmin.com/binvox/linux64/binvox?rnd=16811490753710
cd ../
chmod +x bins/binvox
```

4. Install InNeRF360:

```bash
cd ../
git clone https://github.com/IVRL/InNeRF360.git
cd InNeRF360
```
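
With the environment in place, you can optionally run a quick sanity check that the CUDA-enabled PyTorch from step 2 imports correctly. This is a minimal sketch; the expected version string assumes the cu117 wheels installed above:

```python
import torch

print(torch.__version__)          # expected: 1.13.1+cu117
print(torch.cuda.is_available())  # should print True on a CUDA-capable machine
```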


## Project Organization

```
root
├── nerfstudio              # installation of Nerfstudio
└── InNeRF360               # GitHub repo files. Model details to be released!
    ├── data
    │   └── ShapeNetCore.v2 # ShapeNet data
    ├── data_modules
    ...
    ├── bins
    │   └── binvox          # binvox binary used to voxelize cubes
    └── docs                # website
```

## 3D diffusion model

Our geometric prior is trained on cubes sampled from ShapeNet meshes. To download the ShapeNet dataset, log in or create an account at https://shapenet.org and then download the [ShapeNetCore.v2](https://shapenet.cs.stanford.edu/shapenet/obj-zip/ShapeNetCore.v2.zip) dataset.

We will release training details soon. In the meantime, we provide a [pretrained checkpoint](https://drive.google.com/file/d/1krt7xg0fujtRX_pXafQ8fDWFK9BoaeHq/view?usp=sharing) for our model.
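
For concreteness, here is a minimal sketch of how a single ShapeNet mesh can be voxelized with the `bins/binvox` binary from step 3 and loaded with the `binvox_rw` module bundled in this repo. The grid resolution and placeholder paths are illustrative assumptions, not our exact training configuration:

```python
import subprocess
import binvox_rw  # bundled reader/writer, see binvox_rw.py

# Voxelize one ShapeNet mesh at an illustrative 32^3 resolution.
mesh = "data/ShapeNetCore.v2/<synset>/<model_id>/models/model_normalized.obj"
subprocess.run(["bins/binvox", "-d", "32", mesh], check=True)

# binvox writes its output next to the input, with a .binvox extension.
with open(mesh.replace(".obj", ".binvox"), "rb") as f:
    voxels = binvox_rw.read_as_3d_array(f)

print(voxels.dims, int(voxels.data.sum()), "occupied voxels")
```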

## Dataset Preparation

We perform our experiments on Nerfstudio datasets, which can be downloaded from the [website](https://docs.nerf.studio/quickstart/existing_dataset.html) as follows:

```bash
ns-download-data nerfstudio --capture-name nerfstudio-dataset
```

## Inference

Our code is built upon Nerfbusters, which in turn builds on a specific branch of the original Nerfstudio codebase. It is therefore not possible to use the default `ns-viewer` to inspect the trained results.

For inspection and comparison purposes, we retrained our checkpoints to be compatible with the default viewer; they can be found [here](https://drive.google.com/drive/folders/1UBS4qLOW1Cv70Lsvt9ZYL2Z7i-Oell5g?usp=sharing).

## (TODO) Training

## Acknowledgement

This project is based on [Nerfstudio](https://docs.nerf.studio/) and [Nerfbusters](https://github.com/ethanweber/nerfbusters). We thank the authors of these incredible codebases for their prior work.

---

### binvox_rw.py
# Copyright (C) 2012 Daniel Maturana
# This file is part of binvox-rw-py.
#
# binvox-rw-py is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# binvox-rw-py is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with binvox-rw-py. If not, see <http://www.gnu.org/licenses/>.
#

# File is originally copied from
# https://github.com/shubham-goel/ds/blob/main/src/utils/binvox_rw.py
# which may include changes from the original codebase.
""" | ||
Binvox to Numpy and back. | ||
>>> import numpy as np | ||
>>> import binvox_rw | ||
>>> with open('chair.binvox', 'rb') as f: | ||
... m1 = binvox_rw.read_as_3d_array(f) | ||
... | ||
>>> m1.dims | ||
[32, 32, 32] | ||
>>> m1.scale | ||
41.133000000000003 | ||
>>> m1.translate | ||
[0.0, 0.0, 0.0] | ||
>>> with open('chair_out.binvox', 'wb') as f: | ||
... m1.write(f) | ||
... | ||
>>> with open('chair_out.binvox', 'rb') as f: | ||
... m2 = binvox_rw.read_as_3d_array(f) | ||
... | ||
>>> m1.dims==m2.dims | ||
True | ||
>>> m1.scale==m2.scale | ||
True | ||
>>> m1.translate==m2.translate | ||
True | ||
>>> np.all(m1.data==m2.data) | ||
True | ||
>>> with open('chair.binvox', 'rb') as f: | ||
... md = binvox_rw.read_as_3d_array(f) | ||
... | ||
>>> with open('chair.binvox', 'rb') as f: | ||
... ms = binvox_rw.read_as_coord_array(f) | ||
... | ||
>>> data_ds = binvox_rw.dense_to_sparse(md.data) | ||
>>> data_sd = binvox_rw.sparse_to_dense(ms.data, 32) | ||
>>> np.all(data_sd==md.data) | ||
True | ||
>>> # the ordering of elements returned by numpy.nonzero changes with axis | ||
>>> # ordering, so to compare for equality we first lexically sort the voxels. | ||
>>> np.all(ms.data[:, np.lexsort(ms.data)] == data_ds[:, np.lexsort(data_ds)]) | ||
True | ||
""" | ||
|
||
import numpy as np | ||
|
||
|
||


class Voxels(object):
    """ Holds a binvox model.
    data is either a three-dimensional numpy boolean array (dense representation)
    or a two-dimensional numpy float array (coordinate representation).
    dims, translate and scale are the model metadata.
    dims are the voxel dimensions, e.g. [32, 32, 32] for a 32x32x32 model.
    scale and translate relate the voxels to the original model coordinates.
    To translate voxel coordinates i, j, k to original coordinates x, y, z:
    x_n = (i+.5)/dims[0]
    y_n = (j+.5)/dims[1]
    z_n = (k+.5)/dims[2]
    x = scale*x_n + translate[0]
    y = scale*y_n + translate[1]
    z = scale*z_n + translate[2]
    """

    def __init__(self, data, dims, translate, scale, axis_order):
        self.data = data
        self.dims = dims
        self.translate = translate
        self.scale = scale
        assert axis_order in ('xzy', 'xyz')
        self.axis_order = axis_order

    def clone(self):
        data = self.data.copy()
        dims = self.dims[:]
        translate = self.translate[:]
        return Voxels(data, dims, translate, self.scale, self.axis_order)

    def write(self, fp):
        write(self, fp)
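

# Illustrative helper (an addition for this writeup, not part of the original
# binvox-rw-py API): applies the voxel-to-world transform documented in the
# Voxels docstring above, assuming dense indices in 'xyz' axis order.
def voxel_to_world(model, i, j, k):
    """ Map voxel indices (i, j, k) to original model coordinates (x, y, z). """
    x_n = (i + .5) / model.dims[0]
    y_n = (j + .5) / model.dims[1]
    z_n = (k + .5) / model.dims[2]
    return (model.scale * x_n + model.translate[0],
            model.scale * y_n + model.translate[1],
            model.scale * z_n + model.translate[2])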


def read_header(fp):
    """ Read binvox header. Mostly meant for internal use.
    """
    line = fp.readline().strip()
    if not line.startswith(b'#binvox'):
        raise IOError('Not a binvox file')
    dims = list(map(int, fp.readline().strip().split(b' ')[1:]))
    translate = list(map(float, fp.readline().strip().split(b' ')[1:]))
    scale = list(map(float, fp.readline().strip().split(b' ')[1:]))[0]
    line = fp.readline()  # consume the 'data' line; the RLE payload follows
    return dims, translate, scale


def read_as_3d_array(fp, fix_coords=True):
    """ Read binary binvox format as array.
    Returns the model with accompanying metadata.
    Voxels are stored in a three-dimensional numpy array, which is simple and
    direct, but may use a lot of memory for large models. (Storage requirements
    are 8*(d^3) bytes, where d is the dimensions of the binvox model. Numpy
    boolean arrays use a byte per element).
    Doesn't do any checks on input except for the '#binvox' line.
    """
    dims, translate, scale = read_header(fp)
    raw_data = np.frombuffer(fp.read(), dtype=np.uint8)
    # if just using reshape() on the raw data:
    # indexing the array as array[i,j,k], the indices map into the
    # coords as:
    # i -> x
    # j -> z
    # k -> y
    # if fix_coords is true, then data is rearranged so that
    # mapping is
    # i -> x
    # j -> y
    # k -> z
    # the payload is run-length encoded as (value, count) byte pairs
    values, counts = raw_data[::2], raw_data[1::2]
    data = np.repeat(values, counts).astype(bool)
    data = data.reshape(dims)
    if fix_coords:
        # xzy to xyz
        data = np.transpose(data, (0, 2, 1))
        axis_order = 'xyz'
    else:
        axis_order = 'xzy'
    return Voxels(data, dims, translate, scale, axis_order)


def read_as_coord_array(fp, fix_coords=True):
    """ Read binary binvox format as coordinates.
    Returns binvox model with voxels in a "coordinate" representation, i.e. a
    3 x N array where N is the number of nonzero voxels. Each column
    corresponds to a nonzero voxel and the 3 rows are the (x, z, y) coordinates
    of the voxel. (The odd ordering is due to the way binvox format lays out
    data). Note that coordinates refer to the binvox voxels, without any
    scaling or translation.
    Use this to save memory if your model is very sparse (mostly empty).
    Doesn't do any checks on input except for the '#binvox' line.
    """
    dims, translate, scale = read_header(fp)
    raw_data = np.frombuffer(fp.read(), dtype=np.uint8)

    values, counts = raw_data[::2], raw_data[1::2]

    # start and end linear indices of each run in the flattened (xzy) volume
    end_indices = np.cumsum(counts)
    indices = np.concatenate(([0], end_indices[:-1])).astype(end_indices.dtype)

    # keep only the runs of occupied voxels
    values = values.astype(bool)
    indices = indices[values]
    end_indices = end_indices[values]

    nz_voxels = []
    for index, end_index in zip(indices, end_indices):
        nz_voxels.extend(range(index, end_index))
    nz_voxels = np.array(nz_voxels)
    # according to the binvox docs,
    # index = x * wxh + z * width + y; // wxh = width * height = d * d
    # e.g. for dims = [32, 32, 32], index 1056 = 1*1024 + 1*32 + 0
    # decodes to (x, z, y) = (1, 1, 0)

    x = nz_voxels // (dims[0]*dims[1])    # integer division recovers voxel indices
    zwpy = nz_voxels % (dims[0]*dims[1])  # z*w + y
    z = zwpy // dims[0]
    y = zwpy % dims[0]
    if fix_coords:
        data = np.vstack((x, y, z))
        axis_order = 'xyz'
    else:
        data = np.vstack((x, z, y))
        axis_order = 'xzy'

    return Voxels(np.ascontiguousarray(data), dims, translate, scale, axis_order)


def dense_to_sparse(voxel_data, dtype=np.int64):
    """ From dense representation to sparse (coordinate) representation.
    No coordinate reordering.
    """
    if voxel_data.ndim != 3:
        raise ValueError('voxel_data is wrong shape; should be 3D array.')
    return np.asarray(np.nonzero(voxel_data), dtype)


def sparse_to_dense(voxel_data, dims, dtype=bool):
    if voxel_data.ndim != 2 or voxel_data.shape[0] != 3:
        raise ValueError('voxel_data is wrong shape; should be 3xN array.')
    if np.isscalar(dims):
        dims = [dims] * 3
    dims = np.atleast_2d(dims).T
    # truncate to integers
    xyz = voxel_data.astype(np.int64)
    # discard voxels that fall outside dims
    valid_ix = ~np.any((xyz < 0) | (xyz >= dims), 0)
    xyz = xyz[:, valid_ix]
    out = np.zeros(dims.flatten(), dtype=dtype)
    out[tuple(xyz)] = True
    return out
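
# Illustrative round trip between the two representations (example values,
# added for this writeup, not part of the original module):
#   dense = np.zeros((4, 4, 4), dtype=bool); dense[0, 1, 2] = True
#   sparse = dense_to_sparse(dense)    # array([[0], [1], [2]])
#   back = sparse_to_dense(sparse, 4)  # scalar 4 expands to dims [4, 4, 4]
#   assert np.array_equal(dense, back)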


# def get_linear_index(x, y, z, dims):
#     """ Assuming xzy order. (y increasing fastest.
#     TODO ensure this is right when dims are not all same
#     """
#     return x*(dims[1]*dims[2]) + z*dims[1] + y


def write(voxel_model, fp):
    """ Write binary binvox format.
    Note that when saving a model in sparse (coordinate) format, it is first
    converted to dense format.
    Doesn't check if the model is 'sane'.
    """
    if voxel_model.data.ndim == 2:
        # TODO avoid conversion to dense
        dense_voxel_data = sparse_to_dense(voxel_model.data, voxel_model.dims)
    else:
        dense_voxel_data = voxel_model.data

    # the file is opened in binary mode (see the module doctests),
    # so header and payload must be written as bytes
    fp.write(b'#binvox 1\n')
    fp.write(('dim ' + ' '.join(map(str, voxel_model.dims)) + '\n').encode())
    fp.write(('translate ' + ' '.join(map(str, voxel_model.translate)) + '\n').encode())
    fp.write(('scale ' + str(voxel_model.scale) + '\n').encode())
    fp.write(b'data\n')
    if voxel_model.axis_order not in ('xzy', 'xyz'):
        raise ValueError('Unsupported voxel model axis order')

    if voxel_model.axis_order == 'xzy':
        voxels_flat = dense_voxel_data.flatten()
    elif voxel_model.axis_order == 'xyz':
        # binvox files store voxels in xzy order, so swap the last two axes
        voxels_flat = np.transpose(dense_voxel_data, (0, 2, 1)).flatten()

    # keep a sort of state machine for writing run length encoding:
    # the payload is a sequence of (value, run-length) byte pairs,
    # with run lengths capped at 255
    state = voxels_flat[0]
    ctr = 0
    for c in voxels_flat:
        if c == state:
            ctr += 1
            # if ctr hits max, dump
            if ctr == 255:
                fp.write(bytes((int(state), ctr)))
                ctr = 0
        else:
            # if switch state, dump (skip empty runs left over after a dump)
            if ctr > 0:
                fp.write(bytes((int(state), ctr)))
            state = c
            ctr = 1
    # flush out remainders
    if ctr > 0:
        fp.write(bytes((int(state), ctr)))


if __name__ == '__main__':
    import doctest
    doctest.testmod()