Render semi-transparent meshes into a texture

I am generating tile layers with an external application, and I need to combine them into a single texture at runtime to display on the floor:

Layer 1 (concrete):
Untitled-3.jpg
It’s not properly aligned, but you get the idea.

Layer 2 (planks):
Untitled-4.jpg

Desired result:
Untitled-5.jpg
To achieve this result I used OpenGL: blend functions, no depth buffer, and the geometry shader, which is obviously not possible in UE (translucent materials don’t shade, and we have no access to the geometry shader).
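For reference, the GL-side setup was essentially just back-to-front alpha blending with the depth test off. A minimal sketch, where `Layer` and `drawLayer` are stand-ins for my actual types and draw call:

```cpp
#include <GL/gl.h>
#include <vector>

struct Layer { /* mesh handle, tile data, ... */ };
void drawLayer(const Layer& layer); // stand-in for the real draw call

void renderLayers(const std::vector<Layer>& layers)
{
    glDisable(GL_DEPTH_TEST);                          // no depth buffer
    glEnable(GL_BLEND);                                // classic alpha blending
    glBlendFunc(GL_SRC_ALPHA, GL_ONE_MINUS_SRC_ALPHA); // source over destination
    for (const Layer& layer : layers)                  // drawn back to front
        drawLayer(layer); // the geometry shader expands each tile into a quad
}
```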

Each layer is imported as a separate mesh that samples a tileset like this one, and there can be any number of layers:
Untitled-6.jpg

If I blindly import these meshes into UE, I run into several issues:

  • Each tile takes 4 vertices; multiplied by the number of layers, this may have an unnecessary impact on performance,
  • UE doesn’t support shading for translucent materials, and I can’t use an opacity mask since the layers must be blended,
  • Z-fighting may occur between layers if they are too close.

To solve all this, I need to render these layers into a texture and display that texture on a plane. I will do this only once per terrain chunk; chunks are generated at runtime as the camera moves around.
It can be done quite easily in OpenGL, but UE seems to make everything a little tricky (for good reasons, I’m sure).
I am looking at SceneCapture2D, but this little guy doesn’t seem to have an orthographic setting, and using the entire UE pipeline to render just two meshes into a texture seems like a big waste to me: I don’t need to render the entire scene, post-processing, deferred shading, depth sorting…
There is also CanvasRenderTarget2D, but I am not sure yet whether it can help.

I tried to do my homework, but I might have missed something obvious, so what is the best way to accomplish this?

Have you looked into Decal Actors to help you out with this?

I hadn’t considered decals, but they look promising, thank you.
I’ll experiment with those and post my progress.

As far as I understand, I would need one decal per tile, and there can be thousands of tiles in a terrain chunk if you count all the layers.

Of course I could combine the tiles into a single texture, but that’s exactly what I’m trying to do with SceneCapture2D, and then there would be no point in using decals since I can render that texture on a plane.

“UE doesn’t support shading for translucent material” This is simply not true.

What I meant is that shading and shadows on translucent materials don’t behave like they do on opaque meshes, which makes them useless for my purpose. I would be very interested if you can show me the contrary.

  • Static shadowing from static lights is currently not handled for lit translucency. However, dynamic shadows from stationary lights are supported.
  • Lit translucent surfaces are missing direct specular.
  • Lit translucent surfaces get all their direct lighting through the translucency volume lighting texture, which causes their lighting to be lower resolution than needed for most surface materials (glass, water).

That document is a bit old. Now you can forward-render translucency and get per-pixel specular. You can also get SSR on translucency. The rest is still true.

You seem to be overthinking this. If all you want to do is blend those two layers, then you can easily do that in a material:
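Per pixel it’s nothing more than an “over” blend driven by the top layer’s alpha; in the graph that’s a single Lerp node. A rough C++ equivalent of what the node computes:

```cpp
#include "Math/Color.h"

// What the Lerp node computes per pixel: blend the planks over the
// concrete using the planks' alpha (a standard "over" blend).
FLinearColor BlendOver(const FLinearColor& Bottom, const FLinearColor& Top)
{
    return Bottom * (1.0f - Top.A) + Top * Top.A;
}
```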

Oh that’s nice to know. I should experiment more with translucent materials then.

To do what you suggest, I would need one texture per layer with all the tiles in it, which I don’t have. I just have a tileset and the layer meshes that give me the tile coordinates: Untitled-6.jpg
Of course I could generate those layer textures in my external application, but that would mean spending several GB on floor textures…

I would probably use a blueprint that places down instanced static meshes based on the input locations.

Can you export a string containing all of the positional data? You could fairly easily make a blueprint script that parses the string to read the positions. You would simply make an editable variable and paste the string into it on the BP. Of course, more advanced methods are possible, but this would be easy to try.
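The parsing itself is trivial. Just to sketch the idea in C++ (the “x,y|x,y|…” format here is made up; use whatever your exporter writes):

```cpp
#include "Containers/UnrealString.h"
#include "Math/Vector2D.h"

// Parse a flat "x,y|x,y|..." string into tile positions.
TArray<FVector2D> ParseTilePositions(const FString& Data)
{
    TArray<FVector2D> Positions;
    TArray<FString> Pairs;
    Data.ParseIntoArray(Pairs, TEXT("|"), /*InCullEmpty=*/true);

    for (const FString& Pair : Pairs)
    {
        FString X, Y;
        if (Pair.Split(TEXT(","), &X, &Y))
        {
            Positions.Emplace(FCString::Atof(*X), FCString::Atof(*Y));
        }
    }
    return Positions;
}
```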

Well yes, I can import the tile positions and UVs from a JSON file and generate static meshes, but I already have those meshes in UE (generated by my external program):


The problem is that I don’t want to just put one layer mesh above another, because I can’t use translucent materials, and it would be a waste of vertices anyway.

For the record, I am making good progress with CanvasRenderTarget2D, and I just hope that the cost of calling UCanvas::DrawTile for each tile is not too high. Otherwise I’ll fall back to SceneCapture2D, unless someone has a better solution.
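Roughly what I’m testing looks like this. A minimal sketch, where UTileLayerTarget, FTileInstance, and the tile layout are just my own working names:

```cpp
#include "Engine/CanvasRenderTarget2D.h"
#include "Engine/Canvas.h"
#include "CanvasItem.h"
#include "TileLayerTarget.generated.h"

// One tile: where it lands in the chunk texture and which tileset rect it samples.
USTRUCT()
struct FTileInstance
{
    GENERATED_BODY()

    FVector2D Position; // pixel position in the render target
    FVector2D UV0;      // top-left corner of the tile in the tileset (0..1)
    FVector2D UV1;      // bottom-right corner of the tile in the tileset (0..1)
};

UCLASS()
class UTileLayerTarget : public UCanvasRenderTarget2D
{
    GENERATED_BODY()

public:
    UPROPERTY()
    UTexture2D* Tileset;

    TArray<FTileInstance> Tiles; // all layers, ordered back to front
    float TileSize = 32.0f;      // tile size in pixels

    void Init()
    {
        // The delegate fires every time UpdateResource() is called.
        OnCanvasRenderTargetUpdate.AddDynamic(this, &UTileLayerTarget::DrawTiles);
        UpdateResource(); // draws all tiles once
    }

    UFUNCTION()
    void DrawTiles(UCanvas* Canvas, int32 Width, int32 Height)
    {
        for (const FTileInstance& Tile : Tiles)
        {
            FCanvasTileItem Item(Tile.Position, Tileset->Resource,
                                 FVector2D(TileSize, TileSize),
                                 Tile.UV0, Tile.UV1, FLinearColor::White);
            Item.BlendMode = SE_BLEND_Translucent; // alpha-blend layer over layer
            Canvas->DrawItem(Item);
        }
    }
};
```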

If translucency is your problem, why not use masked?

Because I need the in-between alpha values to blend the edges of the tiles nicely. And the artist would be mad if we only had 0 or 1 for the alpha :stuck_out_tongue:

By the way, the final material, on which I will apply the texture that combines the layers, will be masked.

If the DrawTile thing ends up not working, look into a technique called “texture bombing”; it’s possible you could parameterize the grid locations for your shader somehow and look the tiles up that way.

Interesting, so the idea would be to render a plane and manipulate its UVs to sample the correct tiles. I’ll definitely look into that.
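If I go that route, the lookup is mostly UV math. A rough C++ transcription of what a material Custom node would compute per pixel; GridCells, TilesPerRow, and TileIndexAt are assumptions, with TileIndexAt standing in for an index-texture lookup:

```cpp
#include "Math/UnrealMathUtility.h"
#include "Math/Vector2D.h"
#include "Templates/Function.h"

// Map a floor UV (0..1 across the chunk) to a UV inside the tileset:
// pick the grid cell, look up which tile it uses, and remap the cell-local
// UV into that tile's rectangle. One lookup like this per layer.
FVector2D TextureBombUV(FVector2D FloorUV, int32 GridCells, int32 TilesPerRow,
                        TFunctionRef<int32(int32, int32)> TileIndexAt)
{
    // Which grid cell this pixel falls in, and where it sits inside the cell.
    const int32 CellX = FMath::FloorToInt(FloorUV.X * GridCells);
    const int32 CellY = FMath::FloorToInt(FloorUV.Y * GridCells);
    const FVector2D LocalUV(FMath::Frac(FloorUV.X * GridCells),
                            FMath::Frac(FloorUV.Y * GridCells));

    // Remap the cell-local UV into the selected tile's tileset rectangle.
    const int32 TileIndex = TileIndexAt(CellX, CellY);
    const float TileUVSize = 1.0f / TilesPerRow;
    return FVector2D((TileIndex % TilesPerRow + LocalUV.X) * TileUVSize,
                     (TileIndex / TilesPerRow + LocalUV.Y) * TileUVSize);
}
```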

So CanvasRenderTarget2D allowed me to make things look exactly as I wanted, and the performance is optimal since there are just 4 vertices and the shaders are as simple as they can be:

The only drawback is the generation of the textures and the calls to UCanvas::DrawItem. It is not that slow, but my levels will be pretty large and have a LOT of tiles, and I probably can’t generate all the textures at the start of the level.
Is it possible to call UCanvas::DrawItem from a thread, and would it have any benefit? I don’t want the game to freeze for a quarter of a second when a new terrain chunk is generated. In the worst case I could generate a few tiles every frame to make it unnoticeable.
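To sketch the fallback I have in mind: as far as I can tell, Canvas drawing has to be driven from the game thread (the actual GPU work is already enqueued to the render thread anyway), so instead of threading it I would spread the work across frames. All the names below (ATerrainStreamer, FChunkRequest, PendingChunks, GenerateChunkTexture) are made up:

```cpp
#include "GameFramework/Actor.h"
#include "Containers/Queue.h"
#include "TerrainStreamer.generated.h"

struct FChunkRequest { FIntPoint ChunkCoord; }; // made-up request payload

// Made-up actor that owns the pending-chunk queue.
UCLASS()
class ATerrainStreamer : public AActor
{
    GENERATED_BODY()

public:
    virtual void Tick(float DeltaSeconds) override
    {
        Super::Tick(DeltaSeconds);

        // Build at most one chunk texture per frame so no single frame
        // pays the whole UCanvas::DrawItem cost.
        FChunkRequest Request;
        if (PendingChunks.Dequeue(Request))
        {
            GenerateChunkTexture(Request); // one UpdateResource() per chunk
        }
    }

private:
    TQueue<FChunkRequest> PendingChunks;
    void GenerateChunkTexture(const FChunkRequest& Request); // draws the tiles
};
```

I would spread whole chunks rather than individual tiles, since each UpdateResource() seems to redraw (and by default clear) the whole target.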