How to improve image quality by monosdf rendering #77
Do I need to adjust the number of layers in the rendering network or the number of sampling points in the configuration file?
Hi, the mesh looks reasonable. Did you use per_image_code in your training?
The "image_code" here seems to be just an index input, so it would only improve the training views and not the test views.
Hi, the per-image code was proposed in the NeRF-in-the-Wild paper and can model large appearance variance. It's true that it can't improve the test views, since we don't have a per-image code for them.
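For anyone unfamiliar with the idea: a minimal PyTorch sketch of such a per-image appearance code is below. This is a hypothetical illustration, not MonoSDF's actual rendering network; the class name `RenderingNetWithImageCode` and the layer sizes are assumptions. Each training image gets a learnable latent vector (via `nn.Embedding`) that conditions the color MLP, letting it absorb per-image exposure and lighting changes.

```python
import torch
import torch.nn as nn

class RenderingNetWithImageCode(nn.Module):
    """Hypothetical sketch of a NeRF-in-the-Wild-style per-image code."""

    def __init__(self, num_images, code_dim=32, feat_dim=256):
        super().__init__()
        # One learnable appearance code per training image.
        self.image_codes = nn.Embedding(num_images, code_dim)
        self.mlp = nn.Sequential(
            nn.Linear(feat_dim + code_dim, 256), nn.ReLU(),
            nn.Linear(256, 3), nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, features, image_idx):
        # features: (N, feat_dim) per-sample features
        # image_idx: (N,) long tensor of training-image indices
        code = self.image_codes(image_idx)  # (N, code_dim)
        return self.mlp(torch.cat([features, code], dim=-1))

net = RenderingNetWithImageCode(num_images=400)
rgb = net(torch.randn(8, 256), torch.zeros(8, dtype=torch.long))
```

Because the codes are indexed by training-image id, there is no code for a held-out test view, which is exactly why this trick improves training-view fitting but not test-view PSNR.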
Thank you. Are there limitations when using larger multi-room scenes, such as network forgetting issues? How should we go about solving them?
Hi, in this repo we sample rays from a single image at each iteration, since we use a monocular depth loss and the rays must come from the same image. If the scene is big, the model might have forgetting issues. It might be better to adapt it to use rays from multiple images, e.g. 16 per iteration.
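The suggestion above could be sketched roughly as follows. This is an assumed illustration, not code from the repo; the function name `sample_rays_multi_image` and the 384x384 resolution are made up. The key point is that rays are drawn from several images per iteration but stay grouped by image, so a per-image (scale/shift-ambiguous) monocular depth loss can still be computed within each group.

```python
import torch

def sample_rays_multi_image(num_total_images, images_per_iter=16,
                            rays_per_image=64, h=384, w=384):
    """Hypothetical sketch: draw rays from several images per iteration,
    grouped by image so a per-image monocular depth loss still applies."""
    # Pick a random subset of training images for this iteration.
    image_ids = torch.randperm(num_total_images)[:images_per_iter]
    batch = []
    for img_id in image_ids:
        # Random pixel indices within this image (assumed h x w resolution).
        pixel_ids = torch.randint(0, h * w, (rays_per_image,))
        batch.append((img_id.item(), pixel_ids))
    return batch

batch = sample_rays_multi_image(num_total_images=400)
```

In the training loop, the depth loss would then be evaluated separately per `(img_id, pixel_ids)` group before averaging, since the monocular depth scale and shift differ per image.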
I have a multi-room scenario, using 400 images for reconstruction with MonoSDF. The rendered novel viewpoints only achieve a PSNR of 21. How can I improve this?