Draw Rendertarget to other Rendertarget - GPU computation


I’m trying to create a setup to do some computations on the GPU for performance reasons. I have a data render target which I want to update every frame and run shader computations on. As a basic setup I just wanted to copy the color information from the previous frame and darken it a bit. The expected outcome is a colorful texture that slowly darkens with each frame. Eventually I would compute something more elaborate.

So, now for my render setup. I created a SceneCapture2D and a render target. The show flags for the scene capture ignore everything in my scene; all the capture renders is a single blendable, a post-process material. This material is my “computation shader”: it takes a data texture (which should be the texture from the previous frame), darkens it a bit and returns it as emissive. My plan was to set the RT as the target texture for my scene capture and also as a texture parameter input for my material. So essentially the RT gets darkened in the material, the darkened image gets drawn back to the same RT, and in the next frame it gets darkened again by the material.
To debug this system I added a button that would clear the RT with a full rect image. Once the button is released the computation would resume and gradually darken this image over time.

This test didn’t really work, I’m guessing because you can’t render onto the same texture that you’re sampling from. The image was displayed as expected, but the darkening effect didn’t work; the texture reset to its default color (black) as soon as the next frame started:

So I tried an alternate setup, this time with two render targets: RT1 would render onto RT2, and the next frame they would alternate, with RT2 rendering onto RT1. This worked better; however, the image dramatically lost detail over time. It looked as if the wrong mip map level was chosen each frame, resulting in a progressive loss of detail in the image:

(Mip maps are off both in the material as well as the rendertarget)
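The two-render-target alternation described above is the classic “ping-pong” scheme. Here is a toy simulation of it in plain Python (not Unreal code — `darken_pass` is a stand-in for the post-process material, and the flat lists stand in for texel data):

```python
# Two "render targets", here just flat lists of grayscale texel values.
rt = [[0.2, 0.5, 0.9, 1.0], [0.0, 0.0, 0.0, 0.0]]
initial = list(rt[0])  # remember the starting image for comparison

def darken_pass(src):
    # Stand-in for the post-process material: sample the previous
    # frame's texture and return a slightly darkened copy.
    return [v * 0.95 for v in src]

read, write = 0, 1
for frame in range(10):
    # Draw from rt[read] into rt[write]; never sample the texture
    # that is currently being written.
    rt[write] = darken_pass(rt[read])
    read, write = write, read  # swap roles for the next frame

# rt[read] now holds the image after ten darkening passes.
```

The key point is that the buffer being read and the buffer being written are never the same object within one frame, which is exactly the restriction the single-RT setup ran into. (The detail loss in the real engine version is a filtering/sampling issue on top of this, not a flaw in the ping-pong idea itself.)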

So, I’m asking if someone might know what could be the problem here. Maybe someone has done something similar?

Just for completeness sake, here is the material and blueprint I used:

First a note: There will be a new engine feature in 4.13 that makes this about 10000x easier. You will get the ability to render directly to render targets (a “Draw Material to Render Target” Blueprint node) without needing a scene capture.

Now, if you want to keep experimenting in the current build: I got this to work a while back, but not using a blendable, so I bet that is your problem.

Usually you cannot write to and read from the same RT at the same time (the new feature I mentioned does indeed have that restriction), but doing it this way is a kind of special hack that shouldn’t have that requirement.

I did this by placing a material on a quad that actually had the render target texture in the material, so every frame it was reading its own contents from the previous frame. I had trails that were spreading and then fading by doing some averaging at the same time.

The tricky part of doing it this way is initializing the texture, since it will always be left in its last state. In my example it wasn’t a problem, since I was rendering a ‘force’ into the material every frame (just a sphere that I dragged around, captured by the scene capture). You will have to do something similar: have another plane floating just above with a “clear” material on it that gets hidden after one frame. Every time you begin play, that ‘clear’ plane takes priority and renders into the RT, and from then on the RT does a circular ‘hall of mirrors’ type effect where it stacks things up and reads its own contents. Or you do *=0.99 to have it slowly fade to black.
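The clear-plane-then-feedback flow can be sketched as a tiny Python simulation (again a toy model, not Unreal code; `clear_material` and `feedback_material` are hypothetical stand-ins for the two materials):

```python
# One render target that feeds back into itself each frame.
rt = [0.0, 0.0, 0.0, 0.0]  # starts in whatever state it was last left in

def clear_material():
    # The "clear" plane: rendered for exactly one frame to initialize the RT.
    return [1.0, 1.0, 1.0, 1.0]

def feedback_material(prev):
    # Reads the RT's own previous contents and fades them: the *=0.99 trick.
    return [v * 0.99 for v in prev]

for frame in range(100):
    if frame == 0:
        rt = clear_material()        # clear plane takes priority on frame 0
    else:
        rt = feedback_material(rt)   # hall-of-mirrors feedback afterwards
```

After the one-frame initialization, every subsequent frame multiplies the stored value by 0.99, so the image fades toward black without ever needing an explicit reset.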

That sounds great! Will we be able to write to (multiple-)render-targets from a material graph?

Sorry for digging up this old thread, but I am in a similar situation as the OP, and just want to ask what exactly this feature is that you mentioned here. What I want to do: I have two render targets that I merge in one material (add, in my case, plus some other math). However, I additionally want the result of this merge to be written back into one of the render targets, so that I can use this result in the next frame for another merge.

Let’s say render target 1 gets new information every frame from the scene capture, and render target 2 is used to store the sum of both render targets. So I would first need to read the render targets, let the material merge them, then write the result back into the same render target 2 that I just read.
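Since a target usually cannot be read and written in the same pass, the accumulation described above is typically done with a ping-pong pair standing in for “render target 2”. A toy Python sketch (hypothetical `scene_capture` and `merge_material` functions stand in for the capture and the merge material):

```python
# rt2 is a ping-pong pair standing in for "render target 2".
rt2 = [[0.0] * 4, [0.0] * 4]
read, write = 0, 1

def scene_capture(frame):
    # Hypothetical per-frame input from the capture; here a constant image.
    return [0.1, 0.2, 0.3, 0.4]

def merge_material(capture, accum):
    # The "add plus some other math" merge; here a plain add.
    return [c + a for c, a in zip(capture, accum)]

for frame in range(5):
    rt1 = scene_capture(frame)  # render target 1: fresh data each frame
    # Read last frame's sum from rt2[read], write the new sum to rt2[write].
    rt2[write] = merge_material(rt1, rt2[read])
    read, write = write, read

# rt2[read] now holds the running sum of five captures.
```

The material only ever samples the half of the pair that is not being written this frame, which gives the same end result as writing back into “the same” render target 2.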

I want to bump this topic: what is the new engine feature in 4.13, and how can it be used now?