Post-process on a smaller Render Texture, then recombine.

I’m not sure why I’m struggling with this, but here goes.

I have a post-process material that works very well, but it’s quite intensive and I don’t really need it operating at full resolution. So I’ve attached a SceneCaptureComponent2D to my camera and I’m rendering to a Render Target (a modest 128 x 128 texture), with the capture source set to depth only. So far so good.

I figure the next step is to run a render pass over this render texture to produce another 128 x 128 texture (with alpha), which I then effectively ‘stretch-blit’ over my scene render.

However, it’s this step that I’m struggling with. Do I need an intermediary Material? Or do I need to run the render pass as a function in the actual Post Process shader? I feel like I’m overlooking something very obvious and that I’m overthinking here.

Put simply, I want to do a post-process on a lower resolution and then recombine.

Rendering the scene a second time with a scene capture from the same point of view as the camera, just to get a lower-resolution copy of the depth buffer, is a huge waste. Don’t do that. Instead, look at the Render Dependency Graph (RDG): downsample the required scene render targets to the intended pass resolution, run the post-process pass with a custom global shader, then upsample and composite the result back over the scene.

Thanks! That gives me something to investigate.