I appreciate the speedy reply. Let me take these a piece at a time and see if I can come up with a solution:
- The engine has to allocate a new render target and set it up, which takes time and memory
But that should only happen at game start, when the render target is created, yes? The performance decrease I’m seeing persists indefinitely, even when no new actors (meaning no new RenderTargets) are being created.
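For reference, my setup allocates the render target exactly once, at BeginPlay, and reuses it from then on. A minimal sketch of that (the actor name and the RenderTarget/CaptureComponent members are placeholders for my own code, not anything from the engine):

```cpp
// Minimal sketch: allocate the render target once and reuse it.
#include "Components/SceneCaptureComponent2D.h"
#include "Engine/TextureRenderTarget2D.h"
#include "Kismet/KismetRenderingLibrary.h"

void AMyCaptureActor::BeginPlay()
{
    Super::BeginPlay();

    // One-time allocation at game start; nothing is created per frame after this.
    RenderTarget = UKismetRenderingLibrary::CreateRenderTarget2D(this, 1024, 1024, RTF_RGBA16f);

    // CaptureComponent is my USceneCaptureComponent2D* member.
    CaptureComponent->TextureTarget = RenderTarget;
}
```

So the allocation itself shouldn’t be something I’m paying for every frame.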
- The SceneCapture2D has to render the scene from its perspective, which can include additional rendering calculations and data transfer between CPU and GPU.
True. I suppose that’s an unavoidable cost, though is the overhead of this really so high that it can add 2+ms of render time per SC2D?
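One thing I can at least do to amortize that per-capture cost is stop the SC2D from capturing every tick and only trigger it when I actually need a fresh frame. A rough sketch using the standard bCaptureEveryFrame / CaptureScene() members (CaptureComponent is my own pointer, as above):

```cpp
// Sketch: capture on demand instead of every frame.
CaptureComponent->bCaptureEveryFrame = false;
CaptureComponent->bCaptureOnMovement = false;

// ...later, only when a fresh capture is actually needed:
CaptureComponent->CaptureScene();
```

That helps when the capture can be infrequent, but it doesn’t explain why a single capture costs 2+ms.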
- The engine has to do additional post-processing on the render target, such as filtering or encoding, which adds extra cost.
I think this may be the cause of the problem: there’s a cost associated with taking the rendered data, encoding/storing it as a texture, and then accessing that texture, whereas the normal rendering process doesn’t do this; it just keeps the data in the GBuffers.
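If that encode/post-process step is the culprit, stripping the capture back as far as possible should at least shave something off. This is roughly what I’ve been trying (my understanding is that SCS_SceneColorHDR captures scene color before the tonemapping pass, and the show flags disable the capture’s own post-process work); a sketch, not a fix:

```cpp
// Sketch: strip the capture-side post-processing as far as possible.
#include "Components/SceneCaptureComponent2D.h"

void ConfigureBareCapture(USceneCaptureComponent2D* Capture)
{
    // Grab scene color before tonemapping instead of the final LDR image.
    Capture->CaptureSource = SCS_SceneColorHDR;

    // Disable post-process features on the capture's own show flags.
    Capture->ShowFlags.SetPostProcessing(false);
    Capture->ShowFlags.SetBloom(false);
    Capture->ShowFlags.SetMotionBlur(false);
    Capture->ShowFlags.SetAntiAliasing(false);
    Capture->PostProcessBlendWeight = 0.0f;
}
```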
- One thing you can try to optimize the performance is to adjust the Render Target settings to minimize the overhead. For example, try using a smaller render target size or lowering the depth precision.
This has no effect. I can set the Render Target size to 2x2 and there’s no (or negligible) impact on performance. It’s a fixed overhead cost of the SC2D running, or evidently so. No matter how much you strip out of the SC2D, there’s a performance hit when it ticks: whether you render base color to a 1024x1024 texture with alpha or depth-only to a 64x64 red-only target, that hit doesn’t change much.
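Concretely, the stripped-down end of that test looked roughly like this (a depth-only capture into a tiny single-channel target), and the per-tick cost barely moved compared to the full-size RGBA version:

```cpp
// Sketch: the "stripped as far as it goes" test case.
#include "Components/SceneCaptureComponent2D.h"
#include "Engine/TextureRenderTarget2D.h"
#include "Kismet/KismetRenderingLibrary.h"

void ConfigureMinimalCapture(USceneCaptureComponent2D* Capture, UObject* WorldContext)
{
    // Tiny single-channel target instead of a full 1024x1024 RGBA texture.
    UTextureRenderTarget2D* TinyTarget =
        UKismetRenderingLibrary::CreateRenderTarget2D(WorldContext, 64, 64, RTF_R16f);

    Capture->TextureTarget = TinyTarget;
    Capture->CaptureSource = SCS_SceneDepth; // depth only, no color shading
}
```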
I think you’ve got it figured out though: the discrepancy is not in the rendering itself, but in taking that render and making an accessible texture from it.
I wonder if there’s a way to do what I’m doing directly from the GBuffer, without that intermediate step.