4) NeRF Editing: Relighting, geometry extraction and scene segmentation
- Shading/BRDF and light extraction
- Relighting with 4D Incident Light Fields
- NeRFReN: Neural Radiance Fields with Reflections
- NeRD: Neural Reflectance Decomposition from Image Collections, 2021
- Neural PIL, 2022
- NeRFactor: Neural Factorization of Shape and Reflectance Under an Unknown Illumination, 2021
- PhySG: Inverse Rendering with Spherical Gaussians for Physics-based Material Editing and Relighting, 2021
- NVIDIA DIB-R++: Learning to Predict Lighting and Material with a Hybrid Differentiable Renderer, 2021
- DE-NeRF: DEcoupled Neural Radiance Fields for View-Consistent Appearance Editing and High-Frequency Environmental Relighting, 2023
- Global illumination, 2022
- Efficient and Differentiable Shadow Computation for Inverse Problems, 2022
- Other
- Comparison of methods for inverse scene lighting
- Full scene de-compositing
- NVIDIA Nvdiffrast – Modular Primitives for High-Performance Differentiable Rendering, 2022
- SAMURAI: Shape And Material from Unconstrained Real-world Arbitrary Image collections, 2022
- NeAI: A Pre-convoluted Representation for Plug-and-Play Neural Ambient Illumination, 2023
- IntrinsicNeRF: Learning Intrinsic Neural Radiance Fields for Editable Novel View Synthesis, 2022
- nvdiffrec: Textured meshes from scenes, 2022
- Editing and Inpainting
- Segmentation of 3D Scenes
- Reference-guided Controllable Inpainting of Neural Radiance Fields, 2023
- Conversion to geometry
- Gap filling with diffusion, 2023
- nerf2mesh, 2023
- Neural Microfacet Fields for Inverse Rendering, 2023
Table of contents generated with markdown-toc

Relighting with 4D Incident Light Fields
Over the past year (2020), we have learned how to make the rendering process differentiable and turn it into a deep learning module. This sparks the imagination, because the deep learning motto is: "If it's differentiable, we can learn through it." If we know how to differentiably go from 3D to 2D, we can use deep learning and backpropagation to go back from 2D to 3D as well.
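As a toy illustration of "learning through" a renderer (the Lambertian `render` function and all values here are stand-ins, not any particular paper's method): if the renderer is differentiable, plain gradient descent can recover a scene parameter from 2D observations alone.

```python
import torch

# Stand-in differentiable renderer: Lambertian shading per pixel.
def render(albedo, light_dir, normals):
    return albedo * (normals @ light_dir).clamp(min=0.0)

normals = torch.nn.functional.normalize(torch.randn(64 * 64, 3), dim=-1)
light = torch.tensor([0.0, 0.0, 1.0])
target = render(torch.tensor(0.8), light, normals)   # "observed" 2D image

albedo = torch.tensor(0.2, requires_grad=True)       # unknown scene parameter
opt = torch.optim.Adam([albedo], lr=1e-2)
for _ in range(500):
    opt.zero_grad()
    loss = ((render(albedo, light, normals) - target) ** 2).mean()
    loss.backward()   # gradients flow from 2D pixels back to the material unknown
    opt.step()
# albedo converges toward 0.8: backpropagation went from 2D back to the scene.
```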
Inverse lighting is a hard, under-constrained problem with ambiguities among shape, reflectance, and lighting. A NeRF bakes shading and lighting together, so to recover the light in a scene we must also learn about the scene's shading.
The rendering equation (published in 1986)
Let's go back to the rendering equation, which describes physical light transport for a single camera or the human visual system. A point in the scene is imaged by measuring the emitted and reflected light that converges on the sensor plane. Radiance (L) represents the strength of a ray, measuring the combined angular and spatial power densities. Radiance can be used to indicate how much of the power emitted by the light source, and then reflected, transmitted, or absorbed by a surface, is captured by a camera facing that surface from a specified angle of view.
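For reference, the rendering equation (Kajiya, 1986): outgoing radiance at surface point $x$ in direction $\omega_o$ is the emitted radiance plus all incident radiance, weighted by the BRDF $f_r$ and the foreshortening term:

$$
L_o(x, \omega_o) = L_e(x, \omega_o) + \int_{\Omega} f_r(x, \omega_i, \omega_o)\, L_i(x, \omega_i)\, (\omega_i \cdot n)\, d\omega_i
$$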
If we solve inverse lighting, we also automatically learn about inverse shading, i.e. the surface's BRDF (bidirectional reflectance distribution function).
It is possible to re-light and de-light real objects illuminated by a 4D incident light field, representing the illumination of an environment. By exploiting the richness in angular and spatial variation of the light field, objects can be relit with a high degree of realism.
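A minimal numerical sketch of why this works (the sizes and the random transport matrix are placeholders): light transport is linear, so once we know the scene's response to each sample of the incident light field, relighting under any novel illumination is just a matrix-vector product.

```python
import numpy as np

H, W, L = 480, 640, 256            # image size and number of light-field samples
T = np.random.rand(H * W * 3, L)   # placeholder transport matrix: column j is the
                                   # scene imaged under basis light j (e.g. OLAT)
novel_light = np.random.rand(L)    # intensities of the incident-light samples

relit = (T @ novel_light).reshape(H, W, 3)  # image under the novel illumination
```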
Another dimension in which NeRF-style methods have been augmented is in how to deal with lighting, typically through latent codes that can be used to re-light a scene. NeRF-W was one of the first follow-up works on NeRF, and optimizes a latent appearance code to enable learning a neural scene representation from less controlled multi-view collections.
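A minimal PyTorch sketch of the latent-appearance-code idea (layer sizes are illustrative, not NeRF-W's exact architecture): each training image gets a learned embedding that conditions only the color head, so per-image lighting and exposure can be explained, and later swapped, without changing geometry.

```python
import torch
import torch.nn as nn

class ColorHead(nn.Module):
    """Color branch conditioned on a per-image appearance embedding."""
    def __init__(self, feat_dim=256, embed_dim=48, num_images=1000):
        super().__init__()
        self.appearance = nn.Embedding(num_images, embed_dim)
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + embed_dim, 128), nn.ReLU(),
            nn.Linear(128, 3), nn.Sigmoid())

    def forward(self, features, image_idx):
        a = self.appearance(image_idx)                      # (B, embed_dim)
        return self.mlp(torch.cat([features, a], dim=-1))   # (B, 3) RGB

# "Relight" by evaluating the same geometry with another image's embedding:
# rgb = head(features, torch.full_like(image_idx, other_idx))
```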
Neural Reflectance Fields improve on NeRF by adding a local reflection model in addition to density. It yields impressive relighting results, albeit from single point light sources. NeRV uses a second “visibility” MLP to support arbitrary environment lighting and “one-bounce” indirect illumination.
NeRFReN: https://bennyguo.github.io/nerfren/
Source: https://en.wikipedia.org/wiki/Light_stage
Source: Advances in Neural Rendering, https://www.neuralrender.com/
NeRD is a method that can decompose image collections taken from multiple views under varying or fixed illumination conditions. The object can be rotated, or the camera can move around the object. The result is a neural volume with an explicit representation of the appearance and illumination, in the form of a BRDF and a Spherical Gaussian (SG) environment illumination.
https://markboss.me/publication/2021-nerd/
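NeRD (and PhySG below) represent the environment as a mixture of spherical Gaussians. A minimal sketch of evaluating such an SG environment map (the lobe parameters below are random placeholders):

```python
import numpy as np

def eval_sg_env(w, axes, sharpness, amplitudes):
    """Evaluate a spherical-Gaussian environment light.

    Each lobe k is G_k(w) = mu_k * exp(lam_k * (dot(w, xi_k) - 1)).
    w: (N, 3) unit directions; axes: (K, 3) unit lobe axes xi;
    sharpness: (K,) lam; amplitudes: (K, 3) RGB mu.
    """
    cos = w @ axes.T                                   # (N, K)
    weights = np.exp(sharpness[None, :] * (cos - 1.0))
    return weights @ amplitudes                        # (N, 3) radiance per direction

# Placeholder 24-lobe environment, queried for 8 random directions:
K = 24
axes = np.random.randn(K, 3); axes /= np.linalg.norm(axes, axis=1, keepdims=True)
dirs = np.random.randn(8, 3); dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
radiance = eval_sg_env(dirs, axes, np.full(K, 10.0), np.random.rand(K, 3))
```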
From the same authors as NeRD: Neural PIL (TensorFlow): https://github.com/cgtuebingen/Neural-PIL.git
NeRFactor: Neural Factorization of Shape and Reflectance Under an Unknown Illumination, 2021
- https://people.csail.mit.edu/xiuming/projects/nerfactor/
- https://github.com/google/nerfactor

PhySG: Inverse Rendering with Spherical Gaussians for Physics-based Material Editing and Relighting, 2021
NVIDIA DIB-R++: Learning to Predict Lighting and Material with a Hybrid Differentiable Renderer, 2021
https://nv-tlabs.github.io/DIBRPlus/ (also used in the generative model GET3D: https://nv-tlabs.github.io/GET3D/)
DE-NeRF: DEcoupled Neural Radiance Fields for View-Consistent Appearance Editing and High-Frequency Environmental Relighting, 2023
http://geometrylearning.com/DE-NeRF/
The main limitation of all the other methods is their simplified shading model, which does not account for global illumination or shadows. Once the BSDF is obtained, a path tracer can be used to synthesize one-light-at-a-time (OLAT) renderings of the scene.
From the paper: "We show results of our method on real scenes of the DTU dataset [10]. We can successfully synthesize high-quality novel views and plausible relighting. This shows that our method is robust to such real-world captures, which are very challenging due to the lack of very precise camera calibration and foreground segmentation, camera noise, and other effects that are typically not present in synthetic datasets."
- https://people.mpi-inf.mpg.de/~llyu/projects/2022-NRTF/data/video.mp4
- https://people.mpi-inf.mpg.de/~llyu/projects/2022-NRTF/
- NeRF-OSR (NeRF for Outdoor Scene Relighting) now has code; diffuse illumination only: https://github.com/r00tman/NeRF-OSR
In the comparison, "Ours" refers to Nvdiffrast.
From the same authors as the NeRD paper above.
- project page: https://t.co/nT1wciK2Ti
abs: https://buff.ly/41JfMmh project page: https://buff.ly/43NImF0
https://github.com/NVlabs/nvdiffrecmc See https://github.com/3a1b2c3/seeingSpace/wiki/Hands-on:-Getting-started-and-Nerf-frameworks#nvdiffrec--mesh-and-light-reconstruction-from-images
- LERF: grounding CLIP vectors volumetrically inside a NeRF allows flexible natural language queries in 3D (see the sketch after the links below)
abs: https://buff.ly/42nMomv project page: https://buff.ly/42nMpXB
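A sketch of how such a language query could work, assuming a trained field `clip_field` that returns a CLIP-aligned feature per 3D point (that callable is hypothetical; the text encoder below uses the real open_clip API):

```python
import torch
import open_clip

model, _, _ = open_clip.create_model_and_transforms(
    "ViT-B-32", pretrained="laion2b_s34b_b79k")
tokenizer = open_clip.get_tokenizer("ViT-B-32")

@torch.no_grad()
def relevancy(clip_field, points, query):
    """Cosine similarity between each 3D point's feature and a text query.

    clip_field: hypothetical callable, (N, 3) points -> (N, D) CLIP-space features.
    """
    text = model.encode_text(tokenizer([query]))   # (1, D)
    text = text / text.norm(dim=-1, keepdim=True)
    feats = clip_field(points)
    feats = feats / feats.norm(dim=-1, keepdim=True)
    return feats @ text.T                          # (N, 1) relevancy per point
```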
Reference-guided Controllable Inpainting of Neural Radiance Fields
abs: https://buff.ly/3KVoUNX project page: https://buff.ly/3UPwjmq
The neural network can also be converted to a mesh in certain circumstances (see https://github.com/bmild/nerf/blob/master/extract_mesh.ipynb). We first need to infer which locations are occupied by the object: create a grid volume in the form of a cuboid covering the whole object, then use the NeRF model to predict whether each cell is occupied. This is the main reason why mesh extraction is only available for 360° inward-facing scenes, not forward-facing scenes.
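A minimal sketch of that grid-and-threshold procedure, assuming a `density_fn` callable that maps (N, 3) points to the NeRF's sigma values (the callable and the iso-level are assumptions; adapt them to your framework):

```python
import numpy as np
from skimage import measure  # pip install scikit-image

def extract_mesh(density_fn, bound=1.5, resolution=256, threshold=25.0):
    """Mesh a NeRF density field with marching cubes.

    density_fn: hypothetical (N, 3) -> (N,) sigma query.
    bound: half-size of the cuboid assumed to contain the object.
    threshold: sigma iso-level treated as "occupied".
    """
    # 1) Regular grid covering a cuboid around the object.
    xs = np.linspace(-bound, bound, resolution)
    grid = np.stack(np.meshgrid(xs, xs, xs, indexing="ij"), axis=-1).reshape(-1, 3)

    # 2) Query the network for density, chunked to bound memory use.
    sigma = np.concatenate([density_fn(c) for c in np.array_split(grid, 64)])
    sigma = sigma.reshape(resolution, resolution, resolution)

    # 3) Marching cubes on the occupancy iso-surface.
    voxel = 2 * bound / (resolution - 1)
    verts, faces, normals, _ = measure.marching_cubes(
        sigma, level=threshold, spacing=(voxel, voxel, voxel))
    return verts - bound, faces, normals  # shift back to world coordinates
```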
Mesh-based rendering has been around for a long time, and GPUs are heavily optimized for it.
Deceptive-NeRF: Enhancing NeRF Reconstruction using Pseudo-Observations from Diffusion Models https://arxiv.org/format/2305.15171
https://immortalco.github.io/NeuralEditor/
DynIBaR: retiming, stabilization, slow motion, and stereo with NeRF reconstruction: https://dynibar.github.io/
One meshing approach uses depth maps to extract a surface as a mesh; this method works for all models (see the sketch below).
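A sketch of the first step of such depth-based meshing: back-projecting a rendered depth map into a world-space point cloud (the OpenCV-style pinhole intrinsics K and the camera-to-world matrix are assumptions), which can then be fused or meshed, e.g. with TSDF fusion or Poisson reconstruction.

```python
import numpy as np

def depth_to_points(depth, K, c2w):
    """Back-project a depth map to world-space points.

    depth: (H, W) depth along the camera axis (OpenCV-style convention assumed).
    K: (3, 3) pinhole intrinsics; c2w: (4, 4) camera-to-world transform.
    """
    H, W = depth.shape
    u, v = np.meshgrid(np.arange(W), np.arange(H))    # pixel coordinates
    pix = np.stack([u, v, np.ones_like(u)], 0).reshape(3, -1).astype(np.float64)
    rays = np.linalg.inv(K) @ pix                     # camera-space directions
    pts_cam = rays * depth.reshape(1, -1)             # scale by depth
    pts_world = c2w[:3, :3] @ pts_cam + c2w[:3, 3:4]  # rotate + translate
    return pts_world.T                                # (H*W, 3)
```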
A PyTorch fork with better licensing: https://github.com/3a1b2c3/nerf2mesh
Neural Microfacet Fields: a method for recovering materials, geometry (volumetric density), and environmental illumination from a collection of images of a scene. https://half-potato.gitlab.io/posts/nmf/
https://github.com/UX-Decoder/Segment-Everything-Everywhere-All-At-Once