Implementing an effect requiring multiple render targets and post processing passes


I am trying to implement an effect that at a high level works as in the attached image (render_setup.png).

To describe the image in words:

  1. Render all the scene geometry using the standard rendering pipeline into “scene color render texture”
  2. Render a subset of the scene geometry (e.g. two boxes) using the standard rendering pipeline (but with different materials from step 1) into “custom color render texture 1”
  3. Render the same subset of the scene geometry using the standard rendering pipeline (but with different materials from steps 1 and 2) into “custom color render texture 2”
  4. Transform “custom color render texture 1” using a fullscreen effect in a post processing material
  5. Composite the result of step 4 with “custom color render texture 2” and “scene color render texture” using a post processing material and display the result on screen

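To make step 5 concrete, here is a minimal CPU-side sketch of one plausible composite for a single pixel. The blend rule (using custom render texture 2’s alpha as a per-pixel mask to blend the transformed texture 1 over the scene color) and all names are assumptions for illustration; in the engine this would live in a post processing material, not C++:

```cpp
#include <cassert>
#include <cmath>

// Plain RGBA value; stands in for a single texel of each render target.
struct Color { float r, g, b, a; };

// Linear interpolation between two colors by t in [0, 1].
static Color Lerp(const Color& x, const Color& y, float t) {
    return { x.r + (y.r - x.r) * t,
             x.g + (y.g - x.g) * t,
             x.b + (y.b - x.b) * t,
             x.a + (y.a - x.a) * t };
}

// Step 5 composite for one pixel (blend rule is an assumption):
// blend the transformed "custom color render texture 1" over the
// scene color, masked by "custom color render texture 2"'s alpha.
Color Composite(const Color& sceneColor,
                const Color& transformedCustom1,
                const Color& custom2) {
    return Lerp(sceneColor, transformedCustom1, custom2.a);
}
```

Where the mask alpha is 0 the pixel keeps the scene color untouched; where it is 1 the transformed custom texture fully replaces it.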
In attempting to do this I hit 2 major problems:

  1. In the first post processing pass I have access to what I have called “scene color render texture” through SceneTexture:PostProcessInput0. By the second pass, however, that input has been overwritten with the result of the first pass, so it is no longer available for compositing. I don’t want to capture the entire scene with a SceneCapture2D just to keep this input in a texture, because that would mean rendering the entire scene twice, which is prohibitively expensive.
  2. I don’t have any facility to create or render into what I have called “custom color render texture 1” or “custom color render texture 2”. I could potentially use the custom depth buffer for one of them and lose my color information, but that would still leave me one render texture short.

Is what I am trying to do even conceptually possible in Unreal 4 (with or without engine modifications) or should I just abandon this? If this is only possible with engine modifications would they be stable enough to actually use in a shipping game?



Hey Phil,

I know that this was posted a long time ago, but I’m having the same problem.
Did you ever find a solution?


What’s the desired end goal for the effect? It might be that there are other ways to achieve what you want to do.

So far I can’t see any way around this other than capturing the full scene several times per frame, which is going to eat into your frame rate quickly. It might be that you can write a custom GBuffer solution that only renders a particular layer of the scene and doesn’t require duplicating the scene render each frame.

I’m trying to make a custom AA for our game, so I have to find all the edges and then calculate the blend weights. It’s based on the SMAA from the Crytek engine. I have the edge detection down, but I need to render that image and save it somehow; otherwise I need to recalculate it every time I need to find the end point of an edge.
I made a thread about it some time ago here
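For reference, a minimal CPU-side sketch of SMAA-style luma edge detection (the first of SMAA’s passes). The 0.1 threshold is the default in the SMAA reference implementation, but the function names and layout here are assumptions for illustration; the real pass runs in a pixel shader:

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Perceptual luma from linear RGB (Rec. 601 weights, as used by SMAA).
static float Luma(float r, float g, float b) {
    return 0.299f * r + 0.587f * g + 0.114f * b;
}

// Mark a pixel as an edge when the luma delta to its left or top
// neighbour exceeds a threshold (SMAA's default is 0.1; tune per game).
std::vector<bool> DetectEdges(const std::vector<float>& luma,
                              int width, int height,
                              float threshold = 0.1f) {
    std::vector<bool> edges(luma.size(), false);
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            const float l = luma[y * width + x];
            const bool left = x > 0 &&
                std::fabs(l - luma[y * width + (x - 1)]) > threshold;
            const bool top = y > 0 &&
                std::fabs(l - luma[(y - 1) * width + x]) > threshold;
            if (left || top) edges[y * width + x] = true;
        }
    }
    return edges;
}
```

The resulting edge mask is exactly the kind of intermediate image the post above wants to render once and reuse, rather than recompute per lookup, which is why a persistent render target matters here.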