I’m trying to understand the feasibility in UE of layered rendering from multiple cameras, composited one on top of the other.
The goal is to render objects of vastly different scales in a complex simulation (e.g. a real-scale satellite traveling across the Solar System).
Attached is a screenshot from a Kerbal Space Program presentation; that game uses Unity3D.
In Unity3D (and also in an in-house project I’ve been working on for the past few years), this is done through a set of features that seem to be missing in UE:
- “depth only” clear flag on cameras,
- render only certain layers from a particular camera,
- render from multiple cameras in the same frame,
- controlling layers from Blueprints (this one could be worked around).
One possible solution I’m investigating is to use SceneCapture cameras and Render Targets, then mix the generated textures in a Material (or Post Process) using depth information to obtain the final composition.
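To make the idea concrete, here is a rough sketch of how one capture layer could be set up in C++. This is untested, engine-side code, not a working solution: `ASpaceCompositor`, `BackgroundCapture`, and `FarScaleActors` are placeholder names I made up, and the depth-based mixing itself would still have to happen in a Material reading the resulting render target.

```cpp
// Sketch only -- assumes an actor owning a USceneCaptureComponent2D
// (BackgroundCapture) and a list of far-scale actors (FarScaleActors),
// both hypothetical names for illustration.
#include "Components/SceneCaptureComponent2D.h"
#include "Engine/TextureRenderTarget2D.h"

void ASpaceCompositor::SetupBackgroundCapture()
{
    UTextureRenderTarget2D* RT = NewObject<UTextureRenderTarget2D>(this);
    RT->InitAutoFormat(1920, 1080);               // match the viewport size

    BackgroundCapture->TextureTarget = RT;
    // Capture HDR scene color so it can be composited later in a material.
    BackgroundCapture->CaptureSource = SCS_SceneColorHDR;
    // Approximate Unity's per-camera layer culling with a show-only list.
    BackgroundCapture->PrimitiveRenderMode =
        ESceneCapturePrimitiveRenderMode::PRM_UseShowOnlyList;
    BackgroundCapture->ShowOnlyActors = FarScaleActors; // e.g. planets
    BackgroundCapture->bCaptureEveryFrame = true;
}
```

The show-only list stands in for Unity's "render only certain layers from a particular camera"; one capture component per scale layer would then feed the compositing Material.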
This seems neither straightforward nor very flexible to me, whereas the Unity3D approach is more powerful and easier to set up.
Does anyone have a better idea for solving this problem? It could really be a show-stopper for adopting UE in our projects.
Thank you in advance,