Performance problem when rendering to textures.

Hi,

For a simulator we want to use multiple projectors to project to the inner side of a dome.
The first step would be to render the world to 4 textures. But when doing so, the performance is already far below what we need.

I attached a UE4 project that has nothing in the world except a CameraRender object that takes care of rendering to 4 textures.
These textures are big indeed. But on my system the frame rate drops to 16 fps! Without the CameraRender object it is 120 fps (and probably more than that).
This is on a laptop with a K3100M graphics chip, so I do not expect very much from it, but I did not expect such a large fps drop.

What am I doing wrong? Can I somehow improve this? We really would like to use Unreal for this project, but with this fps drop it will probably be impossible.

Any help to improve this is very welcome.

Rendering to texture isn’t cheap, that’s just the way it is. You might be better off creating your own GBuffer in code and using that instead; it’ll be far faster than writing out a full image in real-time.

If you have no rendering programmers able to do so, you could also explore rendering the scene via multiplayer and split the load over a couple of machines. The advantage of that approach is that it’s very scalable, and in fact you probably wouldn’t need to use RTT at all.

I’d imagine it’s because you’re essentially rendering the scene to multiple viewports with the same quality as you normally would to only one viewport.

On a console game, with split screen you’d usually lower draw distance and make other tweaks to improve performance. There’s quite a bit of overhead just to do visibility culling nowadays.
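To get a feel for the scale of the problem, here is a back-of-the-envelope comparison of shaded pixels per frame. The 2048×2048 target size is an assumption (the original post only says the textures are "big"), and this ignores fixed per-view overhead like culling and draw-call submission, so it is only a rough first-order sketch:

```python
# Rough fill-rate comparison: four square render targets vs. a single
# main viewport. Each capture re-renders the scene at full quality,
# so the cost scales roughly with the total number of shaded pixels.

def pixels(width, height):
    return width * height

main_viewport = pixels(1920, 1080)      # one 1080p viewport, ~2.1 M pixels
four_captures = 4 * pixels(2048, 2048)  # assumed texture size, ~16.8 M pixels

ratio = four_captures / main_viewport
print(f"approx. cost ratio: {ratio:.1f}x")  # approx. cost ratio: 8.1x
```

Under these assumed sizes the four captures shade roughly 8× as many pixels as the normal viewport, which is in the same ballpark as the reported drop from 120 fps to 16 fps.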

As TheJamsh said, go for the multiplayer solution. If you can afford 4 projectors and a room for the dome, getting 4 mid-range PCs should not be too steep.

With multiplayer you could also try running 2 (or more) copies of the application on a single PC. It’s very flexible.

Best would be a mix of those two approaches, to get the best of both worlds.

Hi TheJamsh,
Can you explain this a bit more? Aren’t writing to a GBuffer and writing to a texture buffer the same in performance on modern graphics cards?

In this multiplayer solution, how can I be sure that everything is rendered exactly the same?
For a standard multiplayer game it doesn’t matter if particle effects or trees, for example, are not exactly the same for each player.
But in my case you will notice it immediately if such a tree or particle effect sits on the edge between two neighbouring screens.