
SVGF Implementation Discussion #511

Open · gkjohnson opened this issue Jan 28, 2024 · 3 comments
@gkjohnson (Owner) commented Jan 28, 2024

> #292 (comment)
>
> I would like to contribute an SVGF implementation.
>
> I have implemented the temporal filter and spatial filter parts, but I would like to ask for advice on how to get the pathtracing frame buffer with the direct diffuse texture removed, and how to combine the direct diffuse texture + denoised frame buffer.
>
> If you have any advice, I'll try to organize and post a PR. Thanks for sharing your great library!

Adding an SVGF implementation sounds great. Can you explain how this would work? Are you wanting to reproject the pathtraced colors as the camera moves? From what I understand SVGF won't work well with antialiasing, stochastic transparency, or depth of field effects, is that right? I guess I want to understand the scenarios in which this would and wouldn't work.

> I would like to ask for advice on how to get the pathtracing frame buffer with the direct diffuse texture removed, and how to combine the direct diffuse texture + denoised frame buffer.

Can you elaborate on what exactly you need for the SVGF implementation? You need a pass with no textures and no specular or transparency surfaces? Right now the path tracer only supports outputting a single final image. The bsdfEval function evaluates the ray scattering and color contributions of the materials. It would probably be possible to output the specular and transmissive contributions separately from that function along with weights so they could be combined separately, or at least saved as separate textures.
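For illustration, here is a rough sketch of what that per-lobe split could look like on the shader side. Only bsdfEval exists in the library today; the struct and function names below are hypothetical:

```glsl
// Hypothetical per-lobe evaluation result; a split variant of bsdfEval
// could fill this struct instead of returning one combined color, and
// the caller would accumulate each field into its own buffer.
struct LobeContribution {

	vec3 diffuse;       // diffuse scattering contribution
	vec3 specular;      // specular scattering contribution
	vec3 transmission;  // transmissive scattering contribution
	float pdf;          // sampling pdf shared by the lobes

};

// Summing the split fields reproduces the single color that bsdfEval
// effectively returns today, so the combined output stays unchanged.
vec3 combineLobes( LobeContribution c ) {

	return c.diffuse + c.specular + c.transmission;

}
```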

cc @KihwanChoi12

@KihwanChoi12 (Contributor) commented Jan 29, 2024

> Adding an SVGF implementation sounds great. Can you explain how this would work? Are you wanting to reproject the pathtraced colors as the camera moves? From what I understand SVGF won't work well with antialiasing, stochastic transparency, or depth of field effects, is that right? I guess I want to understand the scenarios in which this would and wouldn't work.

I will explain how they work in the order the algorithms are applied below.
For SVGF, it is common practice to apply TAA after getting the final result, and the interaction with depth of field needs more investigation. Here's a scenario where it doesn't work well:
SVGF introduces temporal blur, which means that even after a light source is turned off, its illumination lingers for a few frames, and glossy highlights leave trails.
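To make that temporal blur concrete, here is a minimal sketch of the accumulation step of a temporal filter; the uniform and varying names are placeholders, not code from this repo:

```glsl
precision highp float;

// Minimal sketch of SVGF-style temporal accumulation.
uniform sampler2D currentColor; // noisy path-traced frame
uniform sampler2D historyColor; // reprojected accumulated history
uniform float blendAlpha;       // weight of the new frame, e.g. 0.2

in vec2 vUv;
out vec4 outColor;

void main() {

	vec3 current = texture( currentColor, vUv ).rgb;
	vec3 history = texture( historyColor, vUv ).rgb;

	// Exponential moving average: the history term is what causes the
	// blur described above. With blendAlpha = 0.2, a light that was
	// just turned off still contributes pow( 0.8, n ) of its energy
	// after n frames, so highlights fade out rather than disappear.
	outColor = vec4( mix( history, current, blendAlpha ), 1.0 );

}
```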

> Can you elaborate on what exactly you need for the SVGF implementation? You need a pass with no textures and no specular or transparency surfaces? Right now the path tracer only supports outputting a single final image. The bsdfEval function evaluates the ray scattering and color contributions of the materials. It would probably be possible to output the specular and transmissive contributions separately from that function along with weights so they could be combined separately, or at least saved as separate textures.

I would like to start with a simplified version of the SVGF algorithm, similar to the one implemented in the following repository (https://github.com/TheVaffel/spatiotemporal-variance-guided-filtering):

  1. It does not separate direct and indirect lighting.
  2. The temporal filter uses the same implementation as the following PR (Feature: Temporal Resolving #241); the ghosting and 1px darkening will be improved later (see the sketch below).
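As an illustration of the usual fix for that ghosting (my assumption about the direction, not code from #241): the reprojected history is validated against the current G-buffer and rejected on disocclusion, so stale samples are dropped instead of blended in.

```glsl
// Sketch of a standard SVGF-style history validation test; the names
// are placeholders and the thresholds are typical values, not tuned
// for this repository.
bool historyIsValid( vec2 prevUv, float currDepth, vec3 currNormal,
	sampler2D prevDepthTex, sampler2D prevNormalTex ) {

	// reject samples that reproject outside the previous frame
	if ( any( lessThan( prevUv, vec2( 0.0 ) ) ) ||
		any( greaterThan( prevUv, vec2( 1.0 ) ) ) ) return false;

	// reject on relative depth mismatch (disocclusion)
	float prevDepth = texture( prevDepthTex, prevUv ).r;
	if ( abs( prevDepth - currDepth ) / max( currDepth, 1e-4 ) > 0.1 ) return false;

	// reject when the surface orientation changed too much
	vec3 prevNormal = texture( prevNormalTex, prevUv ).xyz;
	if ( dot( prevNormal, currNormal ) < 0.9 ) return false;

	return true;

}
```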

The inputs are:

  • The rendered direct diffuse, indirect diffuse, and indirect specular lighting (with the direct diffuse texture divided out).
  • Normal and depth textures (using the rasterized result).

The output of the denoiser is as follows:

  • The denoised frame after applying a temporal filter and a spatial filter to the input frame. (The temporal filter and spatial filter can be applied together or individually.)

Finally, the output of the denoiser is combined with the direct diffuse texture (albedo) frame to generate the final result; a sketch of that demodulate/remodulate step is below.
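To the original question about removing and re-applying the direct diffuse texture, the standard approach is to demodulate the noisy frame by albedo before denoising and remodulate afterwards. A minimal sketch, with placeholder texture names:

```glsl
precision highp float;

// Sketch of albedo demodulation / remodulation around the denoiser.
uniform sampler2D pathtracedColor; // noisy frame: lighting * albedo
uniform sampler2D albedoTex;       // rasterized direct diffuse texture

in vec2 vUv;
out vec4 outColor;

void main() {

	vec3 albedo = texture( albedoTex, vUv ).rgb;

	// 1. Demodulate: divide out the albedo so the denoiser only sees
	//    (comparatively smooth) lighting, not texture detail. The max()
	//    guards against division by zero on black albedo.
	vec3 lighting = texture( pathtracedColor, vUv ).rgb / max( albedo, vec3( 1e-3 ) );

	// 2. In a real pipeline the temporal and spatial passes run here,
	//    as separate passes over the demodulated buffer.

	// 3. Remodulate: multiply the denoised lighting by albedo to
	//    restore the texture detail in the final image.
	outColor = vec4( lighting * albedo, 1.0 );

}
```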

I've attached screenshots below; they are from the full version rather than the simplified one, but they should give you an idea.

[Seven screenshots attached, captured Jan 29, 2024.]

Reference: Real Time Path Tracing and Denoising in Quake II RTX (https://www.youtube.com/watch?v=FewqoJjHR0A)

cc @gkjohnson

@gkjohnson (Owner, Author)

Thanks for the references! And sorry for the delay - I wanted to take a look at the video and make sure I generally understood the approach.

From the video and the diagram, though, it looks like transparent / transmissive surfaces aren't handled using SVGF at all, is that correct? I'm wondering what your plans are for these surfaces. Will they be ignored and remain noisy? I.e., will only opaque surfaces be denoised?

> The rendered direct diffuse, indirect diffuse, and indirect specular lighting (with the direct diffuse texture divided out).

It sounds like getting these textures would be the next step, which can be done via MRT. Does modifying the bsdfEval function to output a set of weights and an output color (or color premultiplied by the weight) for each lobe / sampling path (specular, diffuse, etc.) sound reasonable?
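For illustration, the MRT side in a WebGL2 (GLSL ES 3.0) fragment shader could look like the sketch below; the attachment names and locations are hypothetical, and on the JavaScript side this would pair with a multiple-render-target setup such as three.js's WebGLMultipleRenderTargets:

```glsl
precision highp float;

// Hypothetical MRT outputs for the split buffers; names and locations
// are illustrative only.
layout( location = 0 ) out vec4 gDiffuse;  // diffuse lighting (direct + indirect)
layout( location = 1 ) out vec4 gSpecular; // indirect specular lighting
layout( location = 2 ) out vec4 gAlbedo;   // direct diffuse texture

void main() {

	// Placeholder values: in practice each lobe's (possibly weight-
	// premultiplied) contribution from the modified bsdfEval would be
	// accumulated into its own attachment here.
	gDiffuse = vec4( 0.0 );
	gSpecular = vec4( 0.0 );
	gAlbedo = vec4( 1.0 );

}
```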

@giovanni-a

Cannot wait to see this implemented 🤩
