feature-request: Smooth monte-carlo #2
Comments
Hi @PhilAndrew If you find that Shadertoy Monte Carlo example, please post a link. Something like that I could definitely try, because it is already in a language I deal with on a daily basis. :)

One caveat with the Monte Carlo smoothing is that interactivity decreases while in 'pretty' mode, and when getting out of 'pretty' mode. For example, say I'm flying the first-person camera around; everything is 30-60 fps, no problem. Then I stop to inspect an object, ok fine; now we go into photo-realistic high-quality mode. In my experience, naive approaches such as throwing more samples at the screen negatively impact the framerate (understandably so: we were doing 4-bounce depth at 1 spp = 4 bounces a frame, and now we want 8 spp = 32 bounces or more a frame). To some degree, this is how all the major path tracers/renderers do it. So the frame rate drops to 15 or 10 fps. So far so good; you wouldn't notice the frame rate drop if you had a static indoor scene. But once you want to break away from this view and continue flying the camera, the time it takes from your initial mouse move to break out of pretty mode and head back into real-time flying mode was annoying and at times unbearable for me. The interactivity as a whole seemed to go out the window.

One of the goals of this project is to keep the frame rate as high as possible. My dream would be something like Brigade 2 or Brigade 3 by Otoy, rather than Cycles or Octane, if that makes sense (although I'll probably never get there because I'm working inside WebGL and it's just me, a hobbyist, but it's fun to dream! lol). Again, thanks for the paper link, and please let me know if you find that Shadertoy example!
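The "pretty mode" refinement described above is usually done with progressive accumulation: while the camera is still, each new 1-spp frame is averaged into a running estimate, so the image refines over time without raising the per-frame sample count. Here is a minimal sketch of the idea in plain JavaScript; the function name and toy scalar samples are mine, not from this renderer:

```javascript
// Incremental mean: avg_n = avg_{n-1} + (x_n - avg_{n-1}) / n.
// In a real renderer this runs per pixel (e.g. blending the new frame
// into an accumulation buffer); here a scalar stands in for one pixel.
function accumulate(runningAverage, newSample, frameCount) {
  return runningAverage + (newSample - runningAverage) / frameCount;
}

// On any camera movement the history is discarded (frameCount resets
// to 1), which is exactly the "break out of pretty mode" moment
// described above.
let avg = 0;
const samples = [4, 2, 6, 0]; // per-frame radiance samples (toy values)
samples.forEach((s, i) => { avg = accumulate(avg, s, i + 1); });
// avg is now the plain mean of the samples: (4 + 2 + 6 + 0) / 4 = 3
```

The appeal of this scheme is that each frame still costs only 1 spp, so the framerate never drops; the tradeoff is the visible restart of noise the moment the camera moves.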
What you did is surprisingly fast anyway; compared to the Shadertoy examples, it seems a little faster than those.
Here are the links: Not a Shadertoy, but maybe interesting. For volumetric Direct Light using MIS. Also, there are always new papers each day... like this one I found on my Twitter today: http://graphics.cs.williams.edu/papers/PhotonI3D13/Mara13Photon.pdf
Hi @PhilAndrew Thanks so much for those links! I was aware of the volumetric example by sjb on ShaderToy because I had borrowed some of sjb's code for sampling spherical and point lights in a volume: https://www.shadertoy.com/view/Xdf3zB

I had also seen the MIS ShaderToy example, and I really like the look of it. However, Eric Veach's original math/algorithms and the ShaderToy implementation are pretty hefty and a little over my head. I might revisit it though, because it is a very robust solution to all materials/lighting situations. I will definitely check out the RHF (Ray Histogram Fusion) paper; it even has source code (yay!).

Regarding convergence speed, before I added Direct Light sampling the convergence was painfully slow. After adding Direct Lighting (which some of the ShaderToy examples don't really implement), the convergence speed rocketed! Diffuse materials converge almost instantly. Still, the remaining bottleneck for convergence speed is bright caustics shining on diffuse surfaces (bright light to mirror to wallpaper, or bright light through a glass sphere onto the floor). But maybe some of the papers you linked to can help mitigate these issues.

Yes, there seem to be new path tracing papers every week! As our computers/graphics cards get more capable (heck, my cell phone can run all the examples in this GitHub repo!), I think real-time demo/games artists are going to slowly move away from the sometimes limited/hacky rasterization pipeline to the path tracing pipeline. Thanks again for the links!
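For readers unfamiliar with why direct light sampling speeds up convergence so much: at every diffuse bounce, instead of waiting for a randomly scattered path to happen to hit a light, the renderer explicitly evaluates the light's contribution at the hit point. A minimal sketch for a single point light follows; the function name, the vector-as-array convention, and the omission of the shadow ray are my simplifications, not the renderer's actual code:

```javascript
// Direct lighting from a point light at a diffuse surface point:
// inverse-square falloff times the clamped cosine term.
// (A real implementation would also trace a shadow ray to check
// visibility before adding this contribution.)
function directLightPoint(hitPoint, normal, light) {
  const toLight = [
    light.position[0] - hitPoint[0],
    light.position[1] - hitPoint[1],
    light.position[2] - hitPoint[2],
  ];
  const dist2 = toLight[0] ** 2 + toLight[1] ** 2 + toLight[2] ** 2;
  const dist = Math.sqrt(dist2);
  const l = toLight.map((c) => c / dist); // unit direction toward the light
  const cosTheta = Math.max(
    0,
    normal[0] * l[0] + normal[1] * l[1] + normal[2] * l[2]
  );
  return (light.intensity * cosTheta) / dist2;
}

// Surface at the origin facing up, light 2 units overhead:
const irradiance = directLightPoint(
  [0, 0, 0],
  [0, 1, 0],
  { position: [0, 2, 0], intensity: 4 }
);
// 4 * 1 / 2^2 = 1
```

Because this contribution is added deterministically at every bounce, diffuse surfaces converge almost immediately; caustics remain hard because the light reaches the diffuse surface only through a specular chain, which this technique cannot sample directly.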
https://twitter.com/morgan3d/status/865728937527267346 New neural network Monte Carlo smoother |
You should take a look at Metropolis Light Transport before trying to find "fancy" ways of denoising the image. |
Hi @bit2shift I recently implemented Bi-Directional Path Tracing (from Eric Veach, the same author who created Metropolis Light Transport). Check out the new BiDirectional Demo. However, I couldn't implement his full algorithm because A. it was designed for non-realtime CPU renderers back in the 90's and has higher memory demands, and B. I couldn't quite wrap my brain around his "path weighting" details.

As far as Metropolis Light Transport is concerned, part of the problem with even getting started with this technique is that it requires a full BiDirectional path tracing pass to be done first, and then uses this data as the starting point to mutate the paths for the Metropolis portion. Since I don't even know how to implement the full BiDirectional algorithm as outlined by Veach, I can't begin the Metropolis algorithm. Plus, the Metropolis path mutating and weighting is very involved and too hard for me to grasp at this point. Thank you for the suggestion though! I might be able to incorporate some of the over-arching ideas into my GPU realtime renderer. :-)
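For context on the "path weighting" mentioned above: the building block of Veach's weighting schemes is the multiple importance sampling (MIS) weight, which blends two sampling strategies (for example, BSDF sampling versus light sampling) so that each is trusted where its pdf is high. The standard published form is the power heuristic with exponent 2; this is a sketch of just that piece, not of the full BDPT weighting:

```javascript
// Veach's power heuristic (beta = 2).
// nf, ng: number of samples taken with each strategy.
// fPdf, gPdf: each strategy's pdf evaluated for the same direction.
// Returns the weight applied to the f-strategy's sample.
function powerHeuristic(nf, fPdf, ng, gPdf) {
  const f = nf * fPdf;
  const g = ng * gPdf;
  return (f * f) / (f * f + g * g);
}

// Where the two pdfs agree, the strategies split the weight evenly;
// where one pdf dominates, that strategy takes nearly all the weight.
```

A useful sanity check is that the two strategies' weights for the same direction always sum to 1, so no energy is double-counted.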
@FishOrBear |
https://www.shadertoy.com/view/XlXfDs This is the latest example, converging in a very short period of time. I tried to access the page on an iPhone and it could not run, although a simpler example can run, so iOS may be the problem. It runs fine for me on Windows 10 with Chrome 64.
There are some ways to make the Monte Carlo result look nicer in a shorter amount of time; one I found is here: https://benedikt-bitterli.me/nfor/
I saw another on Shadertoy but I can't find it just now; I will look later and try to find it. It was a different method.
It would be nice for an image which is not moving to smooth out faster.
Also, it occurred to me that a trained neural network would likely give a good solution for smoothing Monte Carlo output.