How do I run calculations for a post process material on a lower resolution and composite the result with the full resolution frame?
I’m trying to create a ray march volumetric post process effect (that relies on the scene color and depth). It’s a costly effect, so I’d like to be able to run it on a scaled down version of the screen, maybe 1/3 or 1/4 resolution.
I’m thinking I need to do something with the RDG (Render Dependency Graph) to grab the result of the main pass and downscale it, run my effect in a compute shader that outputs to a render target, and then lerp between the scene color and my effect render target in a post process material. Are there any examples of this out there? I’m not sure if there’s a cleaner way to do this, or if I’m even on the right track.
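For the final compositing step, here is a minimal CPU-side sketch of the "lerp between scene color and effect render target" idea (`Pixel`, `CompositeEffect`, and the use of the effect's alpha as the blend factor are all illustrative assumptions, not engine API):

```cpp
#include <cassert>
#include <cmath>

// One RGBA pixel; alpha here is the effect's coverage/opacity.
struct Pixel { float r, g, b, a; };

// Linear interpolation between two scalars.
static float Lerp(float A, float B, float T) { return A + (B - A) * T; }

// Composite the (upsampled) effect over the full-resolution scene color:
// the "lerp between scene color and effect RT" step described above.
Pixel CompositeEffect(const Pixel& SceneColor, const Pixel& EffectColor)
{
    const float T = EffectColor.a;
    return Pixel{
        Lerp(SceneColor.r, EffectColor.r, T),
        Lerp(SceneColor.g, EffectColor.g, T),
        Lerp(SceneColor.b, EffectColor.b, T),
        1.0f
    };
}
```

In a post process material this is just a Lerp node driven by the effect texture's alpha channel; the sketch only makes the math explicit.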
Hi SeanInd,
I am having the same problem here. I wonder if you did happen to find a solution for this?
Thanks!
Niagara grids can be used if I’m not mistaken. They can both read from the GBuffer and write to render targets. Presumably this could be done at any resolution, although I haven’t tried it.
Interesting… Thank you! I will try it!
I made a video about an engine modification that does this: https://www.youtube.com/watch?v=K398K2VWSxQ
Hey, thanks for posting the video.
Since I posted this over a year ago I learned a lot about the render dependency graph and the options available for my specific use case.
What I ended up doing is creating a SceneViewExtension that renders the effect in PrePostProcessPass_RenderThread. I had to write my own scene depth downscaling compute shader that lets you reduce the resolution from native by an arbitrary factor. Integer ratios like 1/2 and 1/4 are more efficient and cleaner for downscaling, but for my use case I found the perf cost of allowing non-integer downscaling worth the quality and flexibility.
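Here is a CPU sketch of the arbitrary-ratio depth downscale idea (the function name and point-sampling strategy are my assumptions; the post's actual version is a GPU compute shader, which would map one thread per output texel):

```cpp
#include <algorithm>
#include <cassert>
#include <vector>

// Downscale a depth buffer by an arbitrary (possibly non-integer) factor,
// e.g. 2.5x, by point-sampling the nearest native texel for each low-res
// texel center. Depending on the effect, a min/max gather over the texel
// footprint can be preferable to a single point sample.
std::vector<float> DownscaleDepth(const std::vector<float>& Native,
                                  int NativeW, int NativeH,
                                  int OutW, int OutH)
{
    std::vector<float> Out(static_cast<size_t>(OutW) * OutH);
    for (int y = 0; y < OutH; ++y)
    {
        for (int x = 0; x < OutW; ++x)
        {
            // Map the low-res texel center back into native texel space.
            const float u = (x + 0.5f) / OutW;
            const float v = (y + 0.5f) / OutH;
            const int sx = std::min(NativeW - 1, static_cast<int>(u * NativeW));
            const int sy = std::min(NativeH - 1, static_cast<int>(v * NativeH));
            Out[static_cast<size_t>(y) * OutW + x] =
                Native[static_cast<size_t>(sy) * NativeW + sx];
        }
    }
    return Out;
}
```

Because the mapping is done in normalized UV space rather than by integer strides, the same code handles 1/2, 1/4, or any fractional ratio like 1/2.5.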
After raymarching the volumetric data, compositing with the native-res scene color was the hardest part, and I don’t think my solution is great yet. Depth-aware upsampling is pretty involved, and I haven’t found the right implementation. I ended up computing world-space distance weights for the subscale effect pass samples relative to the native-res depth samples, and added some blue noise offsets to help spread information around and let TAA/DLSS converge better.
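The depth-weighting idea can be sketched like this (a simplified, hypothetical version: bilinear weights attenuated by depth difference rather than the post's exact world-space distance metric, and without the blue noise jitter):

```cpp
#include <array>
#include <cassert>
#include <cmath>

// Depth-aware weights for the 4 low-res neighbour samples around a
// native-res pixel: the bilinear weights are attenuated by how far each
// neighbour's depth is from the native depth, then renormalized, so
// neighbours across a depth discontinuity contribute very little.
std::array<float, 4> DepthAwareWeights(const std::array<float, 4>& Bilinear,
                                       const std::array<float, 4>& LowResDepth,
                                       float NativeDepth)
{
    std::array<float, 4> W;
    float Sum = 0.0f;
    for (int i = 0; i < 4; ++i)
    {
        // Small epsilon keeps perfectly matching depths finite.
        const float DepthDiff = std::fabs(LowResDepth[i] - NativeDepth);
        W[i] = Bilinear[i] / (DepthDiff + 1e-4f);
        Sum += W[i];
    }
    for (float& Wi : W) { Wi /= Sum; } // renormalize so weights sum to 1
    return W;
}
```

When all four low-res depths match the native depth this degenerates to plain bilinear filtering, which is the behaviour you want away from edges.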
This was a helpful post that I found on the topic of depth-aware upsampling:
depth_aware_upsampling.md · GitHub
They have added a more native way of doing this, too. A bit clunky, though.