I’m wondering if it is possible in UE4 to have different cameras with different rendering priorities, in a similar way to how we can use camera depth in Unity.
I want to use this to make 3D interfaces that don’t clip into the environment. If it is not possible… is there any other plausible approach?
Edit:
One example of this (in Unity) is when the player’s arms and weapon in an FPS are rendered by one camera and the world is rendered by another. The player camera has higher “priority”, so the player mesh is never rendered inside a wall mesh.
Another example: I have an inventory made in 3D, but it is rendered by a camera with higher “priority” than the normal eyes/world camera, so the inventory is never rendered inside/behind walls, furniture, props, etc.
I’d like to suggest an answer, but could you possibly provide some more details please? I am not sure what you mean by different rendering priorities for cameras. UE4 can control the sort order for translucency (or you can set a manual sort order), but it isn’t really tied to a camera. Maybe it could be done with a blueprint, depending on what you want.
What does “priority of rendering” mean to you? Do you simply mean making UI geometry render on top of the world if it intersects with the world? If so, that is doable in some ways.
Ok, I see what you mean now. We have done similar things in the past too; in UT, the first person weapon is a separate render layer, etc.
I am not 100% sure how to set this up in UE4 since I haven’t done it yet, but I am pretty sure that it is doable (hence adding a comment instead of an answer). I think the basic idea would be to use a camera (actually a SceneCapture2D) with a render-to-texture target, and to layer that texture inside of a custom post process blendable. With this approach, your character weapon and UI would actually be rendered off somewhere in the black distance under the level, so as to get a clear render target background.
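In rough C++ terms, the wiring might look something like this (untested sketch; the function name, the “UILayerBlendable” material, and its “CaptureTex” parameter are placeholders, and the blendable material itself would be authored in the material editor as a post process domain material):

```cpp
#include "Engine/TextureRenderTarget2D.h"
#include "Components/SceneCaptureComponent2D.h"
#include "Materials/MaterialInstanceDynamic.h"
#include "Camera/CameraComponent.h"

void SetupUILayer(UCameraComponent* PlayerCamera,
                  USceneCaptureComponent2D* Capture,
                  UMaterialInterface* UILayerBlendable)
{
    // Render target the capture draws the UI/weapon scene into.
    UTextureRenderTarget2D* RT = NewObject<UTextureRenderTarget2D>(Capture);
    RT->InitAutoFormat(1280, 720);

    // The capture looks at the UI geometry parked off in the black distance.
    Capture->TextureTarget = RT;
    Capture->bCaptureEveryFrame = true;

    // Dynamic instance so the render target can be fed in as a texture parameter.
    UMaterialInstanceDynamic* MID = UMaterialInstanceDynamic::Create(UILayerBlendable, Capture);
    MID->SetTextureParameterValue(TEXT("CaptureTex"), RT);

    // Composite the captured texture over the main view as a post process blendable.
    PlayerCamera->PostProcessSettings.AddBlendable(MID, 1.0f);
}
```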
Ok, I did a test also. It mostly worked. You will need to go down to the material settings under “Post Process Material” and make sure Blendable Location is set to before tonemapping, or you will get some nasty darkness and temporal AA jittering. Unfortunately, I still saw some smearing on the scene-capture part.
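If you ever need to flip that switch from code instead of the editor, the property lives on the base material, not on instances (sketch from memory; the function name is mine):

```cpp
#include "Materials/Material.h"

// Blendable Location is a property of the base UMaterial, not of material instances.
void ForceBeforeTonemapping(UMaterial* PostProcessMat)
{
    PostProcessMat->BlendableLocation = BL_BeforeTonemapping;
}
```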
The render target idea worked, with the caveat that there is no depth information. To mask, you will need to use the “greenscreen” approach: ensure there is a solid black background in the render-to-texture scene and use that absolute black to mask, then make sure there is no absolute black in the art captured there.
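Expressed as a formula, the composite in the blendable then amounts to a luma key, roughly (ε is whatever small threshold you pick for what counts as background black; the exact material graph wiring is up to you):

m = 1 if max(r, g, b) > ε, otherwise 0
out = m · capture + (1 − m) · scene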
If you need the menu stuff to parallax or move through the world for any reason, you could try synchronizing the scene capture with your player cam, something like the sketch below.
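For example, assuming the UI copy of the scene sits at a fixed offset from the real level (untested, names are mine):

```cpp
#include "Components/SceneCaptureComponent2D.h"
#include "Camera/CameraComponent.h"

// Call every frame (e.g. from Tick). UISceneOffset is whatever translation you
// used when you parked the UI scene off under the level.
void SyncCaptureToCamera(USceneCaptureComponent2D* Capture,
                         const UCameraComponent* PlayerCamera,
                         const FVector& UISceneOffset)
{
    Capture->SetWorldLocationAndRotation(
        PlayerCamera->GetComponentLocation() + UISceneOffset,
        PlayerCamera->GetComponentRotation());

    // Match the FOV too, or the parallax will not line up with the main view.
    Capture->FOVAngle = PlayerCamera->FieldOfView;
}
```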
Another approach I have for you to try, if you want, is a vertex shader that actually collapses things to the screen without changing their apparent size at all. It is pretty neat, but it will mess up how the dynamic shadows are cast onto the world. If there are no shadows on the world from your UI or weapon, this approach might just be way easier and way faster; render targets are not cheap. Let me know.
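For reference, the collapse trick is just a uniform scale of every vertex toward the camera position (my notation; C is the camera position, P the vertex world position, and s a constant you pick):

P′ = C + s · (P − C), with 0 < s ≤ 1

Each vertex stays on its own ray from the camera, so the image on screen is unchanged, but the geometry now sits at a fraction s of its original depth and wins the depth test against anything farther away than that. As a World Position Offset in the material, that comes out to (C − P) · (1 − s).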
That could be useful on a flat screen (monitor), but I was thinking of using this UI in an Oculus build of our game that we are bringing to Gamescom. That’s why I’m not using just a 2D UI. We don’t need world shadows on the interface at all, so we will surely eliminate those, but we cannot flatten the elements to the camera or bring them too close to it, because that would ruin the VR experience.