Dither Temporal AA for Translucency - what am I doing wrong?

I’m creating something for VR that requires a large amount of translucency in the scene. I’m having serious problems with overdraw using a material with the translucent blend mode, so I’m now trying to follow the recommendation of using the Dither Temporal AA node in a masked material to get a much cheaper version of translucency that doesn’t have the same overdraw issues.

However, I’m having two problems with my current setup which make Dither Temporal AA useless as a substitute for translucency:

  1. Translucent objects behind other translucent objects do not appear. In my example scene, I have a box placed in front of a sphere. The box is see-through, but you cannot see the sphere behind it.

  2. There is a kind of dreamy, highly artefacted lag effect going on when moving around the translucent objects, which I cannot seem to eliminate.

Here is a video illustrating both issues:

Here is the super simple material setup:

Both of these issues completely kill the possibility of using Dither Temporal AA as a substitute for translucency, but since this node is a suggested workaround / alternative for a translucent material, I’m assuming I’m doing something wrong with the setup here. What’s the deal?

I want to follow up with a related question about ordinary translucency. My issues with overdraw in an ordinary translucent material could be totally avoided if a pixel with an opacity of 1.0 (maximum, and therefore completely opaque) was rendered in the same way as a pixel from an ordinary opaque material. This does not seem to be the case, and I’m wondering if someone can explain why?

For example, if I place a lot of objects in the scene which have a translucent material but opacity set to 1.0, I get overdraw all over the place, with parts of objects drawn in front of parts they should be behind etc.

Why can’t the renderer recognise that the pixel on the surface closest to the camera has an opacity of 1.0 and skip everything behind it? It seems to proceed through every piece of geometry behind it regardless of that closest pixel’s value. If there were 1000 pieces of geometry in a row, all with translucent materials set to 1.0 opacity, would the renderer really calculate a pixel value for each one, rather than stopping after hitting the first fully opaque pixel? Maybe (probably) I’m misunderstanding something here, but that seems incredibly inefficient and wasteful of rendering time.

If I’m right that this is the case, how difficult would it be to modify the engine so that it didn’t do this?

I’ll try to answer your questions with my knowledge of real-time engines:

  • For the Dither Temporal AA opacity issue, it’s expected behaviour: imagine a checkerboard on your screen where each pixel is alternately black or white. Dither Temporal AA with a value of 0.5 in the opacity mask makes your object visible only in the white pixels of that mask. The same mask also applies to the sphere behind the cube: both objects keep exactly the same set of pixels, so the sphere’s pixels are hidden behind the cube’s. That’s one limitation of Dither Temporal AA: you can’t easily overlap fake-transparent objects.
  • For your overdraw issue: in a deferred rendering pipeline, opaque objects are drawn front to back (so pixels of further objects can be skipped if they’re hidden), but transparent ones are drawn back to front (so they can blend over each other). The geometry of one object shouldn’t interfere with another’s if their bounds don’t overlap. Within a single object, though, you can get artifacts where faces seem to swap, and that’s because of z-sorting: the triangles are drawn in the order they are stored in the mesh data. Either you write to the z-buffer and depth-test, in which case only the nearest faces are shown, or you disable depth testing, in which case a further face drawn later will overlay a nearer one.
    But you shouldn’t see whole objects drawn in front of others, even with transparency…
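The checkerboard explanation above can be sketched numerically. This is my own toy illustration (not engine code, and the object names are made up): both “translucent” objects are stippled with the same screen-space mask, so the front object occupies exactly the pixels the back object could have shown through.

```python
# Sketch: why two dithered "translucent" objects can't overlap.
# Both objects keep only the pixels where the shared screen-space
# checkerboard mask is 1, and those pixels are fully opaque.

def checker_mask(w, h):
    """0/1 checkerboard: 1 = pixel kept by the opacity mask."""
    return [[(x + y) % 2 for x in range(w)] for y in range(h)]

def render(w, h):
    mask = checker_mask(w, h)
    frame = [["bg"] * w for _ in range(h)]
    # Draw sphere first, then cube; both use the identical mask.
    for obj in ("sphere", "cube"):
        for y in range(h):
            for x in range(w):
                if mask[y][x]:          # masked pixels are fully opaque
                    frame[y][x] = obj   # cube overwrites every sphere pixel
    return frame

frame = render(4, 4)
visible = {p for row in frame for p in row}
print(visible)  # the sphere never survives; only "cube" and "bg" remain
```

In a real engine the per-frame temporal jitter shifts the pattern, but within any one frame both objects still resolve to the same pixels, which is why the sphere behind the cube vanishes.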

I hope I was able to help you here :)

Thanks for your answers.

What you’re saying about the reason I can’t see the object behind makes total sense. I can probably work around that, but the weird dreamy lag is a real problem - any ideas how to improve upon that?

Hmmm… so translucency draws from back to front. Is there any way to alter the renderer so that it draws from front to back (on a per-pixel basis) and stops drawing as soon as the accumulated pixel opacity reaches 1? Something along the lines of what is described in this NVIDIA article.
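For what it’s worth, the compositing maths behind that idea is simple to sketch. This is a minimal front-to-back “under” compositing loop of my own (grayscale colours, not engine code): each layer contributes only through the transmittance left over from the layers in front of it, so the loop can bail out once opacity saturates — which is exactly why a fully opaque first layer would make the 1000 layers behind it free.

```python
def composite_front_to_back(layers):
    """layers: list of (color, alpha) pairs sorted nearest-first.
    Returns (final_color, final_alpha, layers_actually_drawn)."""
    color, alpha = 0.0, 0.0
    drawn = 0
    for c, a in layers:
        # "Under" operator: a new layer only shows through the
        # transmittance (1 - alpha) remaining in front of it.
        color += (1.0 - alpha) * a * c
        alpha += (1.0 - alpha) * a
        drawn += 1
        if alpha >= 0.999:   # early out: nothing behind can contribute
            break
    return color, alpha, drawn

# 1000 fully opaque layers in a row: only the first is ever evaluated.
layers = [(0.8, 1.0)] + [(0.2, 1.0)] * 999
color, alpha, drawn = composite_front_to_back(layers)
print(drawn)  # 1
```

Back-to-front blending computes the same final colour but has no equivalent early exit, since the nearest (and potentially fully opaque) layer is processed last.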

Does anyone know how I can totally avoid these problems with parts of meshes appearing in the wrong order?

I have done a test showing the problem. Originally, my tree model was in two parts (one for the trunk, one for the foliage), but I have created a second version where the trunk and foliage have been broken down into separate parts every time there is an intersection, so it has gone from just two parts to 16 parts:

Here is a video showing a camera moving near the two versions of the tree - the original (2 piece) version is on the right, the new (16 piece) version is on the left:

[video]Translucent Trees - YouTube

As you can see, the tree on the right has a problem: parts of the branches that should be hidden by the foliage appear when they shouldn’t. The new version of the tree fixes that, but still has parts appearing in the wrong order. For example, from the higher angle the three large foliage sections appear in the right order, but when the camera lowers they don’t change order, so they appear wrong.

What is actually going on here and how can I fix it?

I need to be able to have objects in my scene which appear most of the time as opaque objects, but have the possibility of becoming transparent. So I need a translucent material that behaves as an ordinary opaque material when opacity is 1, and doesn’t mess up the appearance of the objects in terms of parts appearing in front of other parts. Is this possible?

Ideally, trees should be set up like this: A new, community-hosted Unreal Engine Wiki - Announcements and Releases - Unreal Engine Forums, basically using an opacity mask and subsurface color.

By default UE4 is a deferred renderer, which does a horrible job with translucency; it’s best to avoid it as much as possible. They did add a forward rendering option in 4.14, but it’s not fully featured, and I’m not sure how much better it handles translucency.

You should be using opacity masks or post processes (like this Rendering Occluded actors via Blueprint and Post-Process. - Work in Progress - Unreal Engine Forums ) if you want to see through stuff without using transparency.

I’m not setting up a conventional tree, nor indeed a conventional scene. I’m using a tree as an example here, but the scene is full of all sorts of objects, all of which require this set of conditions:

  1. They can look (and behave) exactly as if they were ordinary opaque materials.
  2. They can become semi-transparent, or totally invisible, on a per-pixel basis.

Let me explain a little more. The basic idea is to have a kind of lens/portal which the player can look through to see the environment in a variety of different ways. I have built a working portal which can alter the materials of the objects behind it on a per-pixel basis. Here is a basic example with the lens in front of a cluster of cubes:

With my setup I can essentially shift any parameter of a material based on whether that part of the object is being seen through the lens or not. It works perfectly. However, since one of the parameters I need to be able to change is the opacity, it means I need to set up the material as translucent as a whole (since there doesn’t seem to be any way to change material blend modes - from opaque to translucent - on a per-pixel basis). The laggy and artefacted version of ‘translucency’ achieved with masked blend mode and Dither Temporal AA seems to me to be unusable unless (as per my first post) I am doing something wrong in the setup and the lag can be eliminated. So I need to figure out why objects, and parts of objects, are rendering in the wrong order when using translucent materials.

I will explore whether there is any difference using the forward renderer.

Alternatively, do you think I will be able to set something up like this using a post process?

The DitherAA translucency hack is not meant for anything this heavy-duty. We use it for organic soft transitions between things like rocks and dirt. Yes, the same smearing is happening, but because the textures are organic and it’s not covering a ton of screen space, you don’t really notice it.

Changing to forward rendering shouldn’t do anything since translucency is already rendered using a separate forward pass.

Sorting for translucency is done on a per-object basis. Within an individual object, the sorting order of the triangles is based on the order the vertices are listed in the mesh’s index buffer. That means if you have a certain fixed viewpoint, you can actually construct your meshes in a way that has perfect sorting, but it would require some advanced procedural mesh work. If all you have is simple cubes like the above images, you could conceivably solve this using a vertex shader somehow. A general-case solution that actually sorts properly and looks like opaque will be pretty hard to come by.
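To make the two levels of sorting concrete, here is a small sketch of my own (object names and positions are invented): the engine sorts whole translucent objects by distance and draws the furthest first, while triangles inside each object stay in index-buffer order, which is why splitting the tree into 16 parts fixed the intra-mesh artifacts but the parts themselves can still swap with the camera angle.

```python
import math

def sort_translucent(objects, camera):
    """Per-object translucent sort: draw furthest first (back to front).
    Triangles *within* each object are left in index-buffer order,
    so self-intersecting parts of one mesh can still sort wrong."""
    def dist(obj):
        return math.dist(obj["origin"], camera)
    return sorted(objects, key=dist, reverse=True)

camera = (0.0, 0.0, 0.0)
objects = [
    {"name": "foliage_near", "origin": (0.0, 0.0, 2.0)},
    {"name": "trunk",        "origin": (0.0, 0.0, 5.0)},
    {"name": "foliage_far",  "origin": (0.0, 0.0, 9.0)},
]
order = [o["name"] for o in sort_translucent(objects, camera)]
print(order)  # ['foliage_far', 'trunk', 'foliage_near']
```

Note the sort key is the object’s origin (or bounds centre), not any individual pixel, so two large overlapping parts can end up in the wrong order for some pixels even though the per-object sort is “correct”.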

I am not sure where improved translucency sorting lies on our future features roadmap.

If you describe more of your actual use case maybe we can suggest a better solution. If your objects really are cubes that are separate meshes, the sorting should be ok and maybe you should try using Alpha Composite blend mode.

Ah! Had no idea this was going to turn out to be so complicated!

Unfortunately, the scene is a lot more complicated than just cubes - that’s just a proof of concept for the portal/material setup.

Constructing the meshes in order to have the vertices listed in a particular order according to the viewing position might not be impossible since the viewpoint will actually be relatively fixed, but it also sounds like a lot of work and I wouldn’t know where to start.

@RyanB: as an engine developer, do you think it’s even possible to alter the way translucency is rendered so that what I’m suggesting could work? Or do you have any other kind of suggestion for what I’m trying to achieve? My entire project currently hinges on the ability to do something like this!

It is certainly ‘possible’ to change the way translucency is rendered, but it’s a relatively large engine task to do so. I am not the best person to give a definitive answer on which solution we will end up with, and as far as I know we don’t currently have it on the schedule. The NVIDIA method posted is interesting, but I am not sure how difficult it would be to redo our translucency to act like that.

It’s fairly straightforward to play with ideas like this in self contained shaders. The raymarching stuff I experimented with did use front-to-back order so that I could early terminate based on transmittance (search for a thread on volume rendering in ue4), but that was only solving it in a self contained shader. I think we need somebody like Daniel Wright or somebody else who has done a lot of the engine rendering plumbing to give a full answer.

I’ve never encountered the concept of raymarching before, so I’ll need to look into that.

I’m certainly happier implementing something within a custom shader than trying to make large alterations to the engine - my portal shader above pretty much all takes place within a custom node in a master shader.

So are you saying that in a self-contained shader, it would be theoretically possible to implement a front-to-back form of transparency within an **opaque** material using something like raymarching? Or would it still have to be a translucent material in the first place, in which case, could raymarching solve the problem of geometry order in the rendering?

There is a way. You have to render your meshes twice.

First time, it has to be an opaque material, but with “Render in Main Pass” disabled, so it renders only to Custom Depth.
Then you render the actual transparent mesh. In its shader, you do your own “z-testing” against the Custom Depth written by the first mesh.
I used this for transparent cells which were seriously heavy-duty. It has the downside that transparent objects behind other transparent objects won’t render, but it fully stops self-intersections. For that reason, you only turn on the rendering of the “depth only” mesh when your material is “opaque”.
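The manual “z-testing” step above can be sketched like this. It’s a simplified model of my own (flat dictionaries standing in for the depth buffer, made-up fragment data, not UE4 API calls): the translucent pass discards any of its own fragments that lie behind the depth written by the invisible opaque copy, so a mesh’s back faces can never bleed through its front faces.

```python
def shade_translucent(fragments, custom_depth, threshold=1e-4):
    """Manual depth test in the translucent 'shader': keep only
    fragments at (or just in front of) the opaque copy's Custom Depth."""
    out = []
    for (x, y, frag_depth, color) in fragments:
        if frag_depth <= custom_depth[(x, y)] + threshold:
            out.append((x, y, color))   # front-most surface: keep
        # else: fragment is behind the opaque copy -> discarded,
        # which culls the mesh's own self-intersecting back faces
    return out

# Depth written by the invisible opaque copy ("Render in Main Pass" off).
custom_depth = {(0, 0): 10.0}
fragments = [
    (0, 0, 10.0, "front_face"),   # matches Custom Depth: survives
    (0, 0, 14.0, "back_face"),    # behind it: culled
]
print(shade_translucent(fragments, custom_depth))  # only the front face
```

This also makes the stated downside visible: a second translucent object behind the first would fail the same depth test and vanish, which is why the depth-only copy is only enabled while the material is meant to look opaque.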

@vblanco: Interesting! I have a couple different ideas for drawing the mesh twice I haven’t tried yet, but what you’re suggesting is new to me. Will definitely have a look, thanks!

How do you render the mesh twice ?

Usually by spawning two copies of the mesh. If it’s animated, you need to make sure all the animations are being set on the copy as well.

That’s exactly what I’m doing: I have a Blueprint with two meshes, one with an opaque material, the other with a translucent material.

Hmmm, I’d really like to see how this is done.

You make two copies. One you set to “Render in Main Pass” = false and give it an opaque material. That should be it.