I’m not sure why I’m struggling with this, but here goes.
I have a post-process material that works very well, but it’s quite expensive and it doesn’t really need to run at full resolution. So I’ve attached a SceneCaptureComponent2D to my camera and I’m rendering into a modest 128×128 Render Target, capturing depth only. So far so good.
I figure the next step is to run a render pass using this render texture and create another 128x128 texture (with alpha) that I then effectively ‘stretch blit’ over my scene render.
However, it’s this step that I’m struggling with. Do I need an intermediate Material? Or do I need to run the render pass as a function in the actual Post Process shader? I feel like I’m overlooking something obvious and overthinking this.
Put simply: I want to run the post-process at a lower resolution and then recombine the result with the full-resolution scene.
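To make the recombine step concrete, here’s a rough sketch (plain Python, not engine code — the function name and the nearest-neighbour upsample are just my illustration, not an actual Unreal API) of the operation I’m after: upscale the low-res effect texture and alpha-blend it over the full-res scene.

```python
def stretch_blit(scene, effect_rgba, scale):
    """Upsample effect_rgba by `scale` (nearest neighbour) and
    alpha-blend it over the full-resolution scene."""
    out = []
    for y, row in enumerate(scene):
        new_row = []
        for x, pixel in enumerate(row):
            # Sample the low-res effect texture at the corresponding texel
            r, g, b, a = effect_rgba[y // scale][x // scale]
            # Standard "over" blend using the effect's alpha
            new_row.append(tuple(pixel[i] * (1 - a) + (r, g, b)[i] * a
                                 for i in range(3)))
        out.append(new_row)
    return out

# Toy data: 4x4 grey scene, 2x2 effect texture with one opaque red texel
scene = [[(0.5, 0.5, 0.5)] * 4 for _ in range(4)]
effect = [[(1.0, 0.0, 0.0, 1.0), (0.0, 0.0, 0.0, 0.0)],
          [(0.0, 0.0, 0.0, 0.0), (0.0, 0.0, 0.0, 0.0)]]
result = stretch_blit(scene, effect, scale=2)
# The top-left 2x2 block of the output is red; the rest stays grey.
```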