Render Loop - First Steps

Hi All,

I am completely new to Unreal Engine and could use some advice and/or some pointers in the right direction. What I’d like to achieve is something like this, in a loop:

  • render the scene
  • read back the render result as an RGB(A) buffer
  • do something fancy with that texture (multipass rendering, ideally some OpenGL shader code)
  • read back the result and dump it to disk (e.g. as a TGA image)

So as you can see, I need to control the render loop somehow. I am not sure if there are existing modules that fit this workflow. Where in the code should I start digging?
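For what it’s worth, the “dump to disk as TGA” step at the end is simple enough to do by hand once the pixels are on the CPU. Here’s a plain, engine-independent C++ sketch (it assumes an 8-bit RGB buffer, row-major, top-left origin — how you obtain that buffer from the engine is the real question):

```cpp
#include <cstdint>
#include <fstream>
#include <string>
#include <vector>

// Write a raw RGB buffer (8 bits per channel, row-major, top-left origin)
// to an uncompressed 24-bit TGA file.
bool WriteTga(const std::string& path, const std::vector<uint8_t>& rgb,
              uint16_t width, uint16_t height)
{
    if (rgb.size() != static_cast<size_t>(width) * height * 3) return false;

    uint8_t header[18] = {};
    header[2]  = 2;                        // uncompressed true-color image
    header[12] = width & 0xFF;             // width, little-endian
    header[13] = (width >> 8) & 0xFF;
    header[14] = height & 0xFF;            // height, little-endian
    header[15] = (height >> 8) & 0xFF;
    header[16] = 24;                       // bits per pixel
    header[17] = 0x20;                     // top-left pixel origin

    std::ofstream out(path, std::ios::binary);
    if (!out) return false;
    out.write(reinterpret_cast<const char*>(header), sizeof(header));

    // TGA stores pixels as BGR, so swap channels while writing.
    for (size_t i = 0; i < rgb.size(); i += 3) {
        const char bgr[3] = { static_cast<char>(rgb[i + 2]),
                              static_cast<char>(rgb[i + 1]),
                              static_cast<char>(rgb[i]) };
        out.write(bgr, 3);
    }
    return out.good();
}
```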

Thank you in advance,


The most efficient/fastest way would be to use a Blendable Material with Post Process, then just take a Hi-Res Screenshot from the editor itself. Easily the most flexible/optimized method.

You can get any of the GBuffers in a post-process material and perform operations on them before they are drawn back to the screen. You can write HLSL functions in there, but if you want to write custom OpenGL functions you’ll have to create them as nodes for the material editor in the engine source. It depends how much you really want to do to it. What specific effect are you trying to achieve?

UE4 Documentation: Post Process Blendables

Thank you for your reply! How is the performance of such a high-res screenshot? It would be nice to stay in 60 fps territory… What I have in mind is an AR application where I also take into account the lens distortion of the camera. So in a post-render pass I would need to remap the output buffer to a deformed shape (e.g. barrel distortion). I might also want to apply some blur to the pixels. From the link you sent I’m not sure whether I can remap the output texture onto different geometry (for the deformation purpose)…
BTW, where do I have to look (some plugin?) to get a webcam stream into the engine and map it as the background?

Thank you!

Hmm, both of those things (the re-shaping and the blur) should be possible inside the post-process material itself. Ripping the image from the GBuffer and sending it to an external program would be hella expensive, more so depending on what’s being rendered at the time, so I doubt you’d be able to hit 60 FPS.
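Just to illustrate how cheap the blur part is per pixel, here’s a naive CPU sketch of a 3x3 box blur on a single-channel buffer (my own reference code, not engine code — it mirrors the kind of neighbourhood average a post-process shader would compute):

```cpp
#include <cstdint>
#include <vector>

// Naive 3x3 box blur on a single-channel 8-bit image. Border pixels
// average only the neighbours that exist (no wrapping or clamping).
std::vector<uint8_t> BoxBlur3x3(const std::vector<uint8_t>& src,
                                int width, int height)
{
    std::vector<uint8_t> dst(src.size());
    for (int y = 0; y < height; ++y) {
        for (int x = 0; x < width; ++x) {
            int sum = 0, count = 0;
            for (int dy = -1; dy <= 1; ++dy) {
                for (int dx = -1; dx <= 1; ++dx) {
                    const int nx = x + dx, ny = y + dy;
                    if (nx < 0 || ny < 0 || nx >= width || ny >= height)
                        continue;
                    sum += src[ny * width + nx];
                    ++count;
                }
            }
            dst[y * width + x] = static_cast<uint8_t>(sum / count);
        }
    }
    return dst;
}
```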

Jeremy Baldwin posted a free tutorial a little while ago on adding lens distortion to a texture based on its UVs. Theoretically, you can apply the same math (with some tweaks) to the UV input of the Scene Texture node and distort the image in the same way. Here’s a link to Jeremy’s tutorial:
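As a rough illustration of the math involved (my own sketch, not code from the tutorial — `k1`/`k2` are example radial coefficients), a barrel-style distortion of a UV coordinate boils down to something like:

```cpp
#include <utility>

// Radial (barrel/pincushion) distortion of a UV coordinate in [0,1]x[0,1].
// Positive k1/k2 push pixels outward with distance from the centre;
// negative values pull them inward. Example coefficients only.
std::pair<float, float> DistortUV(float u, float v, float k1, float k2)
{
    // Work in coordinates centred on the image so the effect is radial.
    const float x = u - 0.5f;
    const float y = v - 0.5f;
    const float r2 = x * x + y * y;
    const float scale = 1.0f + k1 * r2 + k2 * r2 * r2;
    return { x * scale + 0.5f, y * scale + 0.5f };
}
```

The same expression can be reproduced with material nodes (or a Custom node) feeding the UV input of the Scene Texture sample.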

As for getting a webcam stream into the engine, I haven’t come across one myself! Worth posting in the ‘Engine & Github’ section of the forum to see if anybody has one yet!

Hope this helps!