Hello! I’m trying to understand the feasibility in UE of layered rendering from multiple cameras, composited one on top of the other. Here you can see a screenshot from a presentation of Kerbal Space Program (a game built on Unity3D):
The goal is to be able to render objects of vastly different scales in a complex simulation (e.g. a real-scale satellite seen across the Solar System).
In Unity3D (and also in an in-house project I’ve been working on for the past 8 years), this is done through a set of features that seem to be missing in UE:
- “depth only” clear flag on cameras,
- tell a camera which objects to render (through layers),
- render from multiple cameras in the same frame,
- controlling layers from Blueprints (though this one could be optional).
One possible solution I’m investigating is to use SceneCapture cameras and Render Targets, mixing the generated textures in a Material (or a Post Process) using depth information to obtain the final composition.
This seems less straightforward and somewhat limited to me (not to mention the performance cost), while Unity3D’s approach is more powerful and easier to set up.
I have already seen other people asking for something similar, but those questions are quite outdated, and hopefully something has changed in the meantime…
Does anyone have a better idea for solving this problem? This could really be a show-stopper for adopting UE in our projects.
Thank you in advance,
Christian