UE4 still does not have any proper way to do tinted glass

Hi,

So for ages now, UE4 has had no proper way of making tinted glass. There was an ugly workaround of using the SceneColor node in the emissive slot, but that has the issue of ignoring all translucent materials behind it, so it’s usable only in some cases. On top of that, SceneColor is ignored when ray tracing is enabled, which makes it impossible to create tinted glass with ray tracing.
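For reference, the math that workaround performs is trivial. Here’s a purely illustrative Python sketch of what the material effectively computes per pixel (the names are mine, not engine API); it also shows why translucency behind the surface gets ignored: it simply isn’t in the scene color snapshot when the material samples it.

```python
def scene_color_tint_workaround(scene_color, tint):
    """Illustrative only: the SceneColor-in-emissive trick just multiplies
    a snapshot of the already-rendered scene by a tint color. Translucent
    objects drawn after this snapshot are absent from it, which is why
    they vanish behind the "glass"."""
    return [s * t for s, t in zip(scene_color, tint)]

# The result is fed into Emissive Color; opacity is set so the emissive
# output fully replaces the background (otherwise it would blend twice).
```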

Am I missing something or:

  1. Is there really no way to make tinted glass that doesn’t ignore other translucent objects behind it?
  2. Is there really no way to make tinted glass that works with ray tracing?

What we really need is a Modulate blend mode that supports the Lit Surface TranslucencyVolume lighting mode.
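For context on what Modulate does today, here’s the blend math as a plain Python sketch (illustrative, not engine code). The material output multiplies the frame buffer, which is exactly the tinting operation, and it works over translucent surfaces already drawn; what’s missing is the lit part, hence the request.

```python
def modulate_blend(framebuffer_rgb, material_rgb):
    """Modulate blending: the destination is multiplied by the material
    output, darkening/tinting everything already rendered behind the
    surface -- including other translucent surfaces. Being unlit, it
    cannot add the specular and reflection terms a lit surface would."""
    return [d * s for d, s in zip(framebuffer_rgb, material_rgb)]
```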

Thanks in advance.

Not sure about ray tracing, but tinted glass is one of the few things that really looks its best in Unreal.
Check out this stuff.

I’ve seen that; they’re using a ridiculous workaround of two separate glass meshes: one for the non-tinted translucent material, and one behind it for the tint, based on a modulate shader. That will work in the context of a one-trick-pony VFX shot, but it’s an unacceptable solution for anything game-related. You can’t double the number of your meshes, and have them so close together that they’ll most likely cause Z-fighting at a distance, just to pull off a tinted glass shader.

You also can’t have artists creating an inner modulate-material mesh for every mesh that could possibly have some transparent glass. It’s just not realistic. If anything, it shows what a painful route The Mill’s artists had to take just to finish the project.

In fact, they had to use four(!) faces for each glass surface in the end, because double-sided translucent materials in UE4 aren’t even capable of proper depth sorting.

It’s not a workaround, it’s how it works, and it absolutely works in game without any Z-fighting.
It also makes 100% sense to have different layers; it’s exactly the same as you would do for reflectors.

Not at all… it just doesn’t make sense. I mean, really? Having an inner shell for every possible small glass part of a vehicle that’s supposed to have tinted glass? Workarounds are not a solution. The solution is simple: a shading mode that can do tinted glass. There’s absolutely nothing preventing that in real-time graphics.

It’s not just about having a solution, it’s about having a good solution. Anyone can come up with some sort of solution sooner or later, but not every solution should be implemented if it means complicating the process.

Sure, if you have 500 classes in your project, you could have a Blueprint with 500 if statements checking whether each one is the class you’re looking for, or you could do a single cast. Both are solutions to the problem, but one is acceptable while the other is not.
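(For anyone wondering what the comparison means in practice: in UE4 this is a single Blueprint Cast node or a C++ `Cast<T>()`. The same idea in plain, language-agnostic Python, just to make it concrete:)

```python
class GlassActor:
    """Stand-in for the one class we actually care about."""

def handle(obj):
    # One type check replaces any number of per-class comparisons;
    # it also covers subclasses for free.
    if isinstance(obj, GlassActor):
        pass  # work with obj as a GlassActor
```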

So, it would certainly be cool if there was a more streamlined approach, but this seems a bit like an extreme overreaction on several dimensions.

There won’t be Z-fighting, since (as you noted below) translucent materials aren’t even sorted via a Z-buffer (and there are also plenty of built-in features, like pixel depth offset or the camera offset node, that can prevent Z-fighting from ever being a problem in the first place).

And drawing geometry multiple times per frame is already an inherent part of the deferred renderer, so this is hardly something that will have a significant effect on performance. I can’t see how having a “proper” way of doing this would actually make a difference.

What part of this is “unrealistic”? It’s just a question of duplicating all faces carrying a particular material; almost any modeling package can easily be scripted to do this automatically on export.

But is this solution really that bad? I can understand how having a dedicated shading model for this might *feel* “cleaner,” but I’m not convinced that it would be significant in practice.

More importantly, there are plenty of translucent effects that the engine can’t do *at all*, so I’d be surprised if Epic wanted to prioritize something that can already be accomplished in a way that is, frankly, not that unpleasant. It’s a “workaround,” for sure, but then again, so is deferred rendering itself.

I’m vaguely terrified to know in what situation 500 if statements could possibly do the same thing as a cast.

Do I really, seriously need to explain why creating unique, separate, complex geometry with two separate materials is a worse solution than adhering to the PBR standard, where base color defines the transmission color of transmissive materials?

I do get that not everyone needs to be effective at what they do, but I do. I can’t afford to spend exceptional amounts of time achieving mediocre results.

Let’s say I need to do the following: a car has windows and tail lights. The windows need to be dark-green tinted glass, and the tail lights need to be red tinted glass.

Normal, acceptable workflow:

  1. Create the window material: a Lit Translucency material with its translucency tinted green
  2. Create the tail light material: a Lit Translucency material with its translucency tinted red
  3. Apply the window material to the car windows
  4. Apply the tail light material to the car tail lights

Current workflow:

  1. Go back to your DCC, assuming you are lucky enough that it’s your own model and not a bought/received asset
  2. Spend time selecting all the window geometry
  3. Duplicate it
  4. Offset it
  5. Assign it a new material ID
  6. Spend time selecting all the tail light geometry
  7. Duplicate it
  8. Offset it
  9. Spend more time dealing with the mesh self-intersections induced by offsetting a mesh of complex curvature along its face normals
  10. Assign it a new material ID
  11. Bring the asset back
  12. Notice you are no longer able to take advantage of automatic LOD generation, as decimation of two thinly separated neighbouring surfaces is imperfect, so the tint layer sometimes clips into the glass layer
  13. Create the glass material: a Lit Translucency material
  14. Create the green tint material: a green modulate material
  15. Create the red tint material: a red modulate material
  16. Assign the glass material to the glass pieces
  17. Assign the red tint material to the inner tail light layer
  18. Assign the green tint material to the inner window layer

I can’t comprehend what’s in it for you to defend a clearly inferior and ineffective workflow. Why would you advocate against improving the engine? There is absolutely no benefit in treating one solid glass medium as two separate objects. It has tons of downsides and corner-case issues with literally zero benefits.

What’s wrong with doing it this way?

Because you do not want to edit opacity. In this context, opacity should be 1. But Unreal ties opacity to accumulated distortion, which leaves you unable to have any meaningful shading on objects with transmission.

In any case, what the OP is asking for is largely impractical: to get it working, you would need to accumulate tint color from all refractive objects, the same way distortion is accumulated, which is expensive.
You’ve got to settle for using scene color in emissive.

  1. That does not work with ray tracing
  2. It ignores any translucent objects behind it

I don’t think you understood my post. I did not say that there aren’t better ways it could work; in fact, I said the opposite. I did say that I feel you’re exaggerating the degree of inconvenience present in the current way of doing it, and I still believe that. The current workflow is not as bad as you present it.

Like I said, if the windows and tail lights already have separate materials on them, this can easily be scripted in any modeling package so that it can be done with a single button click. Sure, that requires work, but if it’s something you’re going to be doing regularly enough that doing it by hand is genuinely unacceptable, the benefit of writing a script seems more than worth it.
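As a concrete illustration, here’s a rough sketch of what such a script could look like in Blender’s Python API (untested; the function and parameter names are mine, and it assumes the glass faces already carry their own material slot):

```python
import bpy
import bmesh

def add_tint_shell(obj, glass_mat_name, tint_mat, offset=0.001):
    """Duplicate every face that uses the given glass material, push the
    copies inward along their vertex normals, and assign them a tint
    material slot. Sketch only."""
    mesh = obj.data
    bm = bmesh.new()
    bm.from_mesh(mesh)
    bm.normal_update()

    glass_index = mesh.materials.find(glass_mat_name)
    faces = [f for f in bm.faces if f.material_index == glass_index]

    # Duplicate the faces together with their vertices and edges.
    verts = list({v for f in faces for v in f.verts})
    edges = list({e for f in faces for e in f.edges})
    result = bmesh.ops.duplicate(bm, geom=verts + edges + faces)
    new_faces = [g for g in result["geom"]
                 if isinstance(g, bmesh.types.BMFace)]

    # Offset the duplicated shell slightly along the vertex normals.
    bm.normal_update()
    for v in {v for f in new_faces for v in f.verts}:
        v.co -= v.normal * offset

    # Give the shell its own material slot for the tint material.
    if mesh.materials.find(tint_mat.name) == -1:
        mesh.materials.append(tint_mat)
    tint_index = mesh.materials.find(tint_mat.name)
    for f in new_faces:
        f.material_index = tint_index

    bm.to_mesh(mesh)
    bm.free()

# e.g.:
# add_tint_shell(bpy.data.objects["Car"],
#                "GlassWindow", bpy.data.materials["TintGreen"])
```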

Like I also said, if self-intersection between the translucent geometry layers causes a problem, use the camera offset node in one of the layer materials instead of actually including the offset in the geometry. Confirm that self-intersections are even a problem first, though; I’ve never run into that in this situation, and given how depth sorting works with translucency, I don’t think you will either.
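(For reference, the camera offset trick just slides the rendered surface a small distance toward the camera at render time, without modifying the mesh. Roughly this per vertex; an illustrative sketch, not engine code:)

```python
import math

def camera_offset(world_pos, camera_pos, distance):
    """Pull a vertex toward the camera by `distance` units; applied to one
    of two coincident translucent layers, it separates them for rendering
    without the geometry itself ever intersecting."""
    direction = [c - w for c, w in zip(camera_pos, world_pos)]
    length = math.sqrt(sum(x * x for x in direction))
    return [w + (x / length) * distance
            for w, x in zip(world_pos, direction)]
```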

Again, this is pretty easy to script if you know this is something you’ll need to do more than once.

I believe I addressed this very clearly and specifically in my previous post, but I’ll try one more time:

I’m *not* saying this workflow isn’t inferior to one in an ideal world in which this is handled without the user expending any effort. I am saying that there are plenty of other things, even specifically related to translucency, that are significantly *more* challenging, or even impossible, to achieve with the current tools. “Improving the engine,” unfortunately, isn’t a binary proposition. Improving anything necessitates making a choice about what, specifically, to improve.

Personally, I’d rather see attention given to things that can’t be done at all than to things that can already be done, even if the way to do them is fairly awkward. I’m not even trying to give an opinion or make a value judgement here, though. Like I said, I’d simply be *surprised* if Epic decided to prioritize this particular issue over any others.

I would not. The usefulness of ray tracing currently lies mainly in the visualization market, into which Epic is trying to expand rapidly (given the recent developments as well as some items on the 4.24 roadmap), so sooner or later more and more people will get bitter about the inability to ray trace any kind of colored glass.

You also keep mentioning a script to automate those tasks. Aside from the fact that it would still fail in many cases (a fluid simulation of pouring wine, for example), it’s a bad solution. It adds tons of workflow overhead that needs to be constantly managed and manually reviewed to cover corner cases. It’s just not feasible to employ overcomplicated, fragile solutions in production environments.

I mean, look at how they used The Mill’s Human Race car demo to sell all the great aspects of real-time rendering to the public: how everything is suddenly interactive, and real-time, and cool. Yet The Mill continues to use offline rendering for the vast majority of its jobs. Overcomplicated, convoluted solutions, of which this tinted glass thing is a prime example, are the reason it just doesn’t pay off despite all the real-time benefits. The overhead of artists’ time spent engineering and performing tons of unnecessary workarounds is far more expensive than a bunch of machines just crunching frames with offline renderers.

I am also of the opinion that tinted glass is quite complicated to set up right now, and the methods you can use either don’t work with emissive objects behind them or don’t work with ray tracing. You also need to do some manual work duplicating geometry, as explained in the Human Race paper.
There are only workarounds for tinted glass right now, not a real solution.

I’m not sure we’re even really disagreeing about most of this, but it remains my opinion that for translucency in general, including the specific case of ray-traced glass, there are other issues more significant than this one. I think those issues should, and probably will, be addressed first.

Why would it fail in those cases? I don’t see why the approach of duplicating geometry would not be applicable to either a baked geometry cache or a live simulation.

Keeping in mind that not all production environments are created equal, I personally do not find the thought of using some custom logic to prepare data for migration from one piece of software to another particularly alarming. In fact, in my own experience, it’s nearly ubiquitous. Again, note that I’m not saying it’s *desirable*, just that it’s not only possible, but often necessary in practice.

I’m just not sure I agree that this tinted glass thing is a *prime *example of why real time renderers haven’t displaced offline renderers. I think there are plenty of other more pressing limitations involved, to say nothing of the fact that studios tend to have a lot of custom infrastructure built around specific offline renderers that would need to be mostly thrown out and re-created in order to support using Unreal instead. There are also plenty of non-technical challenges, such as training and familiarity, that make it challenging to completely replace a widely-used offline renderer with a real time renderer that most people aren’t yet accustomed to.

We’re saying lots of words at each other, but I’m not sure we’re actually communicating about anything. I’m sure Epic’s developers will be able to formulate an opinion on which limitations, bugs, and missing features should be addressed first without either of our input.

The rudimentary script that would be created to generate tint-material geometry for static meshes would hardly work for simulated fluid caches; anyone with any DCC familiarity would know that. It would instead require some elaborate monstrosity that also covers the case of an animated mesh with dynamic topology. Those caches are usually heavy, so they’re also hardly something you want to be playing back and keeping in memory twice.

I think the main disagreement we have is that you underestimate the importance of having a **complete** basic PBR shading model. Sure, there are many more issues that need to be tackled, but it’s really hard to build on a broken base. In production environments, it’s just very difficult to employ any kind of shading model that does not cover even as common a use case as tinted translucency/refraction.

I’ve been working as an offline 3D generalist for about 11 years now, so I’d say I have a good basis for comparison, and quite a few of my colleagues have tinkered with real-time workflows too, but the consensus is pretty much the same: the main issue is simple stuff requiring ridiculous, time-expensive workarounds. I eventually managed to bite the bullet and transitioned my career into something you could call a UE4 technical artist, but most of my colleagues just weren’t willing to put up with that. Even I am still spending way more time in UE to ultimately achieve inferior quality, but hey, it pays better :slight_smile:

No. I’ve done it on multiple occasions and it works fine. Let me know if my posting a video would change your mind.

See, this is what I mean about not communicating. You still seem to be responding to a claim that I have noted from the beginning that I am not actually making. You will find no disagreement from me that there are many caveats associated with using Unreal as a substitute for an offline renderer. Yes, doing this absolutely requires weird workarounds, hacky solutions, and approximations. If I thought otherwise, I promise I would have said so earlier.

In fact, I have been and am saying something very near to the opposite of that: that there are so many obstacles to using Unreal as a complete replacement for an unbiased path tracer, many of which cannot be surmounted *even with* workarounds, that the fact that there is a usable solution *at all* to the particular problem of tinted glass means it’s probably not at the top of the list of things that need to be addressed.

Even more specifically, I was merely pointing out that several of the particular claims you made regarding the approach linked to by MostHost LA were factually inaccurate.

I understand the claims you are making, but you still don’t understand a crucial difference between complicated solutions that are justified and those that are not. I never claimed I’m going into UE4 with the expectation of replacing a path tracer with it. I do understand the difference between real-time rasterizers and offline path tracers. But there’s a GIANT difference between the technical limitations of the technology and usability deficiencies.

Many of the “weird workarounds, hacky solutions, and approximations,” as you put it, are necessary, since many of those effects just can’t otherwise be achieved with real-time rasterizers; but tinted glass is not one of them. A simple additional shading mode, or the ability to use two existing shading modes together, is completely realistic, and there are quite a few other rasterizers out there that show it’s easily possible.

I am talking specifically about a class of usability issues which are not a consequence of underlying technical limitations and yet come at a significant cost in terms of artists’ time.

To put it bluntly, way too many times I’ve heard something along the lines of the “it’s complicated because this is a real-time renderer” excuse. My point is that it does not always apply. Many cases are simply usability design flaws rather than products of technical limitations. Or, in other cases, products of technical limitations which have been resolved for up to a decade now.

There’s certainly an interesting discussion to be had regarding the limitations of translucency in Unreal specifically, the limitations of translucency in deferred rendering overall, and the limitations of translucency in real time generally.

And I have no doubt that you do hear from a lot of people who don’t understand that the distinction between those things exists and matters. That said, I certainly don’t think anything I’ve said here really pertains to that topic, to say nothing of it demonstrating a fundamental misunderstanding on my part.

Again, I thought I pretty explicitly and consistently laid out the scope of what I was talking about. If anything I said in particular suggests otherwise, feel free to highlight it so that I can be more clear in the future.

Otherwise, I do think it’s clear to both of us that a) there are lots of interesting discussions to be had broadly regarding translucent effects, both from the angle of workflow and from a theoretical standpoint, and b) we are not having any of those discussions.

You can set up transparency to render in different passes, and that’s precisely how you handle water with things like waves.
It’s fairly complicated, and it looks like **** most of the time, but it does work.
The two render passes allow you to single out what is visible and what is invisible by your own priority.
This is generally useful when you make waves, the reason being that one pass needs to show what you actually see (normals pointed at the camera) and hide what you don’t see (the backside of the wave), or sort between the two.
Without that filtering you end up with holes in the water; those aren’t holes, but back faces displaying in front of the front faces. All of which you would discover when working with an ocean/water body from scratch. It’s also not a “this engine only” thing; it’s how most rendering works. In the end, the UE4 workaround is actually a GameGems implementation…
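(A rough sketch of that sorting idea in plain Python, with faces reduced to (center, normal) tuples — my simplification, not how the engine actually represents them: partition by facing and draw the back-facing set first.)

```python
def split_for_two_passes(faces, camera_pos):
    """Partition faces into back-facing and front-facing lists; rendering
    the back-facing list first keeps the far side of a wave from showing
    through the near side."""
    front, back = [], []
    for center, normal in faces:
        to_camera = [c - p for c, p in zip(camera_pos, center)]
        if sum(n * v for n, v in zip(normal, to_camera)) > 0.0:
            front.append((center, normal))
        else:
            back.append((center, normal))
    return back, front  # draw order: back faces first, then front faces
```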
All that to say, you don’t necessarily need double the geometry to sort transparencies.

But in the case of tinted glass, you really do.
It’s actually how tinted glass works IRL, if you think about it: it’s one layer of coating over the glass.
In the engine you just have to invert it to get it to display properly,
mostly because the reflection in front of it needs to shine, while the layer in between is the shade of tint, which you can manipulate/change on the fly.

Particularly for cars, if they are built well, all the windows are already separate/individual meshes so that you can get them to break as needed.
It takes a whole ten seconds to select a window and edit it to have three layers of the same window mesh at a distance of .1mm (because my inside needs reflections too; yours might not).
It also adds overall thickness to the mesh, and it can eventually end up allowing you to create the additional rounded edge seam so you can lower a window like IRL.

Sure, those are all things you may never need IF you don’t plan on having the user/player enter a car, but if you set things up as detailed in the Epic documentation, you get a leg up should you ever change the way it works.
That’s mostly why I don’t believe this to be any sort of “hack,” but rather a logical approach to setting different shades of tint with minimal work, on the fly.

Additionally, keep in mind that what I was saying about the reflectors also extends to front lights and such.
The real-life items in this case really do have several “coatings” or layers as well. The refractors/brake lights, for instance, have up to three layers of plastic with different coatings on them. The front/fog lights have at least two (the transparent front plastic and the interior chrome plating).

Is it a lot of work? No doubt. Is it a workaround? Not so much. To get something to look good, you would probably have to set things up very similarly in almost anything…

This is where you are very wrong. Unreal’s renderer is one of very few rasterizers where rendering tinted refraction/translucency has to be a workaround. This is another of the common fallacious arguments: that tinted refraction is inherently a complicated thing to do.

Here you are also very wrong. Regular, generic tinted glass, such as red glass, is a SINGLE solid dielectric medium without any coating. What gives it a reflection is just the polish of the surface, which makes the surface structure very fine.
[Image: BlockOpticTumbler56a016955f9b58eba4aedd16.jpg]
There is absolutely nothing about a material like this which involves multiple layers.
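For what it’s worth, the physics here is also mathematically simple: the tint is just wavelength-dependent absorption along the light’s path through the medium (the Beer–Lambert law). A minimal sketch, with the per-channel absorption assumed to be derived from the artist’s tint color:

```python
import math

def transmitted_color(background_rgb, absorption_rgb, thickness):
    """Beer-Lambert absorption: light crossing `thickness` units of the
    medium is attenuated per channel by exp(-absorption * thickness).
    Red glass simply absorbs green and blue more strongly than red; no
    second geometric layer is involved anywhere."""
    return [b * math.exp(-a * thickness)
            for b, a in zip(background_rgb, absorption_rgb)]
```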

What’s sad is that most people here have a perspective so deformed by decades of the hacky way rasterizers do things that they are unable to think outside the box and see much simpler solutions. Especially since many of these hacky, overcomplicated approaches are products of technical limitations that are long gone these days.

What I want is for something that IS technically possible to be a one-click solution instead of a complex set of steps, so that I, as an artist, can spend my time focusing on more important things. This just comes down to me being able to trust Unreal’s shading model to handle all possible real-world materials. Currently it can handle all of them except tinted glass, which means it’s incomplete. All I am requesting is a complete shading model.