I am generating tile layers with an external application, and at runtime I need to combine them into a single texture to display on the floor:
It’s not properly aligned but you get the idea.
To do this here I used OpenGL with blend functions, no depth buffer, and a geometry shader, which obviously is not possible in UE (translucent materials aren't shaded, and we have no access to geometry shaders).
Each layer is imported as a separate mesh which samples a tileset like this one, and there can be any number of layers:
If I blindly import these meshes into UE I run into several issues:
- Each tile takes 4 vertices; multiplied by the number of layers, this may have an unnecessary impact on performance,
- UE doesn’t support shading for translucent materials, and I can’t use an opacity mask: the layers must be blended,
- Z-fighting may occur between layers if they are too close.
To solve all this, I need to render these layers into a texture and display that texture on a plane. I will do this only once per terrain chunk; chunks are generated at runtime as the camera moves around.
It can be done quite easily in OpenGL, but UE seems to make everything a little tricky (for good reasons, I’m sure).
I am looking at SceneCapture2D, but this little guy doesn’t seem to have an orthographic setting, and using the entire UE pipeline to render just two meshes into a texture seems like a big waste to me: I don’t need to render the entire scene, post-processing, deferred shading, depth sorting…
There is also CanvasRenderTarget2D, but I am not sure yet whether it can help.
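If CanvasRenderTarget2D does turn out to fit, my understanding is the usage would look roughly like this: create the target once per chunk, bind the update delegate, and draw one full-size quad per layer material so the canvas alpha-blends them in order. A sketch under those assumptions (untested; `ABakedChunk`, `BakedTarget`, and `LayerMaterials` are hypothetical names):

```cpp
#include "Engine/Canvas.h"
#include "Engine/CanvasRenderTarget2D.h"

void ABakedChunk::BakeLayers()
{
    // One render target per chunk, created on demand (size is a guess).
    BakedTarget = UCanvasRenderTarget2D::CreateCanvasRenderTarget2D(
        this, UCanvasRenderTarget2D::StaticClass(), 1024, 1024);
    BakedTarget->OnCanvasRenderTargetUpdate.AddDynamic(
        this, &ABakedChunk::DrawLayers);
    BakedTarget->UpdateResource(); // triggers one DrawLayers call
}

void ABakedChunk::DrawLayers(UCanvas* Canvas, int32 Width, int32 Height)
{
    // Draw each layer's material full-size, bottom to top; translucent
    // materials are blended over what is already in the target.
    for (UMaterialInterface* Layer : LayerMaterials)
    {
        Canvas->K2_DrawMaterial(
            Layer,
            FVector2D(0, 0),           // screen position
            FVector2D(Width, Height),  // screen size
            FVector2D(0, 0),           // UV position
            FVector2D(1, 1));          // UV size
    }
}
```

The floor plane’s material would then just sample `BakedTarget` as a regular texture parameter.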
I tried to do my homework, but I might have missed something obvious. What is the best way to accomplish this?