If you know that it is simple, how about offering a solution or proposing an approach instead?
I already did, a few posts above. All I’d want is a modulate blend mode that is compatible with the Surface TranslucencyVolume model.
Even if it came with some limitations, such as the inability to stack modulate materials, it would still cover most of the use cases.
How exactly would a modulate blend help you here? Modulate blend multiplies the source by the destination.
I believe what was meant by “provide a solution” is “write shader code to do this”. If you cannot, you have absolutely 0 right to say anything is “simple”; otherwise you should just go do it.
Oh and regarding the glass, that’s hardly a tinted car window.
It works slightly differently with blown glass, but the idea is the same: you pigment the silica with different colors/dyes. Just because melting it makes it appear as one surface does not mean you have just one surface.
And you may or may not use coatings on blown glass too. It depends on what kind of finish gets put on it, so you should probably refrain from generalizing it in the first place.
This doesn’t make any sense. Even in high-end automotive visualization (which I’ve done plenty of) in offline renderers, people do not simulate car window tint by actually having multiple geometry layers. A simple dielectric material with tinted refraction looks identical in the vast majority of cases, and even the clients of large automotive brands find it sufficient. If such an advanced simulation is not required even in high-end offline rendered visualization, what would justify it in a realtime game engine? The process is a workaround for a limitation; don’t try to turn it into a feature.
This is the common form of the “you must be an experienced chef to be able to tell if the food tastes good” kind of flawed argument. I can’t write shader code for this now; I probably could after a few months if I looked into it. But I can provide a number of examples of realtime rasterizers that let you create tinted glass materials without workarounds, since the ability to tint translucency/refraction is part of their basic shader model. That alone shows it is possible.
Imagine you live in a state where 4 mobile carriers operate. 3 of them give you 50 GB of data for $20/month, while the 4th one, the one you are with, gives you 0.5 GB for the same price. Based on the other 3, you can clearly see that the possibilities lie at least an order of magnitude beyond what you are getting. When you complain about this to your carrier, asking why their price-to-value ratio is so poor, the answer you get is that you have absolutely 0 right to criticize the value they offer until you start and run a company that becomes a major mobile carrier.
In that particular constellation of parameters, only the emissive component would be multiplied, while the others, such as specular, would be overlaid on top.
As far as I can tell, what he’s actually looking for is a shading model that takes a “tint” color which is multiplied by whatever is behind the translucent surface (the scene color wouldn’t be sufficient because he needs to support overlapping objects as well), after which the (potentially untinted) surface shading would be blended on top, similarly to how the existing “clear coat” shading works.
Of course, implementing that in the deferred renderer directly (and in a way that allows for overlapping tinted geometry) would require having two G-buffers instead of just one (one for modulating the scene color, and one for compositing the surface shading on top), and the only way to do that would be to render all of the geometry an additional time.
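To make that concrete, here is a minimal sketch of the two-step idea in plain Python (the function name and the RGB-triple representation are illustrative, not engine code):

```python
def composite_tinted_glass(scene, tint, surface, coat_alpha):
    """Two-step compositing: first modulate what is behind the glass by
    the tint, then blend the surface shading on top (clear-coat style)."""
    # step 1: modulate the scene color by the accumulated tint
    tinted = [s * t for s, t in zip(scene, tint)]
    # step 2: alpha-blend the (potentially untinted) surface shading over it
    return [c * coat_alpha + b * (1.0 - coat_alpha)
            for c, b in zip(surface, tinted)]
```

With `coat_alpha` at 0 you get a purely tinted background; at 1 you see only the surface shading, which is the behavior the clear-coat analogy implies.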
I’m genuinely not sure what this means, because what you seem to be saying isn’t even slightly true. There are infinitely many (literally) BSDF functions that appear in the real world that aren’t supported. Even incredibly common things like anisotropic highlights and Oren-Nayar diffuse reflectance, and the vast majority of cloth materials can’t be achieved at all (Oren-Nayar shading is technically in the engine, but currently requires modifying shaders to make it available. Even then, it only works correctly with direct light).
Anisotropy has been supported since 4.24; check the Trello board. UE4 has a cloth shading model as well as a fuzzy shading node. Sure, it’s not a physically based simulation of cloth backscattering, but it’s more than sufficient in the context of a realtime engine. I mean, if you look at all the archviz work that’s already been done with the engine, cloth shading is definitely not an issue. What is an issue, though, is the tinted translucency/refraction workflow. Even if we accept the workarounds mentioned above, we still don’t have ray tracing compatibility. “Complete”, in the context of what I said, was supposed to mean just the ability to create the majority of common materials out of the box. Tinted refractive materials fall into the “common” category, since it’s hard to shade even such basic things as vehicles or interior accessories without them.
One more thing:
It would not need to allow for overlapping tinted translucent geometry, at least not in an initial implementation. There are many cases where you need a tinted refractive material, but cases where you need two tinted refractive materials overlapping would be rather rare. Covering 90%+ of the cases would still be better than covering none at all. It’s not like Unreal doesn’t have other limitations when it comes to shading.
This is not a modulate blend. Again, a modulate blend takes a color and multiplies it over the existing color. There is no specular or emissive; by the time it gets blended, it is just a color. By saying that giving you modulate blend on lit translucency would magically fix tinted glass, you are further confusing anyone who might peek into this thread.
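For reference, a modulate blend is nothing more than a per-channel multiply of the incoming color over what is already in the framebuffer; there is no slot for an additive specular or emissive term. A trivial sketch in plain Python (the RGB triples are illustrative):

```python
def modulate_blend(dst, src):
    # modulate blend: the framebuffer color (dst) is multiplied
    # channel-wise by the incoming fragment color (src);
    # nothing is added on top, so specular/emissive cannot survive it
    return [d * s for d, s in zip(dst, src)]
```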
There are two practical ways to tackle the problem.
Use the Blend Add (alpha composite) blending mode and divide the specular lighting by opacity. It works out of the box, needing only 3 lines of shader code changed. It breaks at low opacity values and, instead of a correct tint, gives you a mix of the background color and the transmission color. Better than nothing, but still pretty terrible.
Use dual-source blending. That is what you are expecting: specular lighting would be added and transmission (emissive, base color, whatever) would be multiplied, but there are performance and platform implications.
In both cases, distortion would still leave any transmissive objects behind the first one undistorted.
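The difference between the two approaches can be sketched numerically in plain Python (the opacity, tint and specular values below are made up for illustration):

```python
def alpha_composite_tint(dst, tint, spec, opacity):
    # approach 1: alpha-composite style blend; the tint color is lerped
    # with the background instead of multiplied over it
    return [s + opacity * t + (1.0 - opacity) * d
            for s, t, d in zip(spec, tint, dst)]

def dual_blend_tint(dst, tint, spec):
    # approach 2: dual-source blend; the transmission color multiplies
    # the background while the specular term is added in the same pass
    return [d * t + s for d, t, s in zip(dst, tint, spec)]
```

At low opacity, approach 1 returns mostly the untinted background mixed with a little of the tint color, rather than the background multiplied by the tint, which is exactly the breakage described above; the dual blend keeps the multiply at any opacity.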
So the essence of two pages here:
UE4 needs dual-source blending mode support added for lit and unlit translucency, and for separate translucency.
Until that is done, transmissive shading in UE4 will remain a triumph of rain dance and workarounds.
TLDR: Use Scene Color Material Expression.
Really? I don’t see it on the “done for 4.24” list, but maybe you’re referring to something else I’m missing. It would be really neat if true, especially if SSR takes it into account as well.
Having a single diffuse cloth model with two extra parameters is by no means “more than sufficient” in lots of real-world cases. It has no relevance at all to things like silk, velvet, etc.
More to the point, though, what you said (and what I was responding to) is the claim that you need to be “able to trust Unreal’s shading model to handle all the possible real world materials”, following that up by saying that tinted glass is, somehow, the *only* missing piece. That seems wildly different from *any* of the things you’re saying now. I don’t know if you’re just being needlessly hyperbolic or if you’re just not really paying attention to what you’re actually writing, but it’s difficult to follow the logic, and that’s been the case throughout most of what you’ve written in this thread. I think if you take a bit more time to proofread your posts you’ll probably be able to get your ideas across much more efficiently.
I suspect that you’ll also find that people are more receptive to your ideas in general if you show that you’ve put a little bit more thought into them. A lot of what you’ve written in this thread comes across as just a bunch of disjointed sweeping generalizations and insults, with a few specific but also frequently incorrect details thrown in. I think most people just want to help out, but that’s challenging when one has to jump through a bunch of hoops just to figure out what you’re actually asking about.
Support for overlapping geometry is the sole reason you gave in your first post for why using the scene color input node is not a good solution, though. And doing it that way is both more efficient and more flexible than an additional shading model would be.
This is seriously weird. I swear I saw it there, including screenshots. I may be wrong, but I am 95% sure it was there. Maybe it was taken down for some reason. There were even screenshots of UE4 shader balls with the aniso effect.
Did you actually read my post? I explained precisely what I meant by a complete shading model: complete in terms of being able to cover all **basic** materials, not complete in terms of being able to simulate them to the degree offline renderers do. Here you are accusing me of not proofreading my posts and not making any sense, while you clearly ignored a response to the very thing you are calling out (again). So again, just so that you do not miss it: by a complete shading model I mean the ability to create, at least to some degree, all the common real world materials - NOT the ability to simulate them at the highest level possible (which would cover specific BRDFs for things like velvet). Basically something like VrayMTL, CoronaMTL or Blender’s PrincipledBSDF: while they don’t cover all the niche BRDFs/BSDFs, they do cover the vast majority of basic real world materials.
There’s a difference. Using SceneColor ignores any translucent materials after the first surface hit and does not work with ray tracing. Again, if you read my post carefully: I propose NOT to ignore subsequent translucent materials, but only to ignore the stacking of the modulate component of the material (several color multiplications along the depth). In other words, you’d still see the surface properties of the materials behind, just not correctly stacked volumetric properties (modulated color). Yes, it would still be problematic in some cases, but it would solve the majority of them.
Alright, since it’s getting really tiresome repeating my point, here is a visual example:
The use cases are very simple. For example, car tail lights made of red glass covering glass elements inside, such as light bulbs. Or a fire truck light casing, which is red and blue and has a glass bulb inside. Or just a glass balcony railing with a brown tint, behind which is a glass balcony door. These are all very common use cases which require tinted glass through which you need to see other glass:
A. Regular Surface TranslucencyVolume glass without any tint
B. SceneColor based tinted glass
C. Double surface glass composed from outer shell Surface TranslucencyVolume material and inner shell modulate material
A: Does work with ray tracing, does display translucent materials behind it, requires no workarounds
B: Does not work with ray tracing, does not display translucent materials behind it even in rasterized mode
C: Does not work with ray tracing, does work with the rasterizer but requires a workaround at the geometry level. Serves as proof that the effect **can** be achieved with rasterizers.
As you can see, no solution here is really a win. The reason I made this thread was the hope that this could be done more simply.
Ray Traced Translucency
Ray Traced Translucency (RTT) accurately represents glass and liquid materials with physically correct reflections, absorption, and refraction on transparent surfaces.
For additional information, see Ray Tracing Settings.
So, according to you, the example in the documentation with a translucent door that also has a dark tint is impossible.
Hmm… I wonder how the Unreal team went and did that… they must just be magic.
Care to show me the example you have in mind? A dark tint is easily possible, obviously, since all that’s required is reducing the opacity parameter. I am talking about a colored tint, with a saturated color such as green or red.
I’ve found a formulation identical to the one you’ve posted on this page: https://docs.unrealengine.com/en-US/Engine/Rendering/RayTracing/index.html But I am failing to see any example of colored glass.
Yes, and I quoted it verbatim.
And I don’t doubt that you *meant* something closer to the following
None of that was actually present in your post at all, which is why I used it as an example of how you could add a bit of clarity to make sure people understand your intentions.
Again, this is what you wrote (emphasis mine):
You’re saying that you actually meant something a bit different, which is perfectly fine. As I noted from the beginning, I wasn’t certain what you actually meant.
But I still believe it serves as a good example of why it could be beneficial to spend a bit more time clarifying what you’re asking in the first place, to make sure that what you write actually reflects how you want people to interpret it. If nothing else, it’ll at least lessen the sense that you’re deliberately antagonizing everyone who tries to actually answer your questions. I imagine most people don’t like to take the time to respond to something, only to be insulted for not responding to something else that wasn’t actually stated or asked.
It seems you are far more concerned with the semantics of the posts in this thread than with their substance. I am interested in the exact opposite. A few posts above, I posted a video with a practical example of the limitations of all the aforementioned approaches. While I may have failed to be precise in my formulations, I highly doubt most people would be unable to understand the general point I was trying to make.
Your posts seem intended to do anything except actually break down or solve the issue. I am far more interested in practical solutions to the issues illustrated by the video.
Nah, I was just trying to explain why I think that what you are asking for (or what I believed you were asking for) isn’t quite as easy to implement or even necessarily as useful as you thought, and that you’re likely to be disappointed if you expect it to be implemented soon as a native engine feature. I also explained in some detail how the other approaches you dismissed might actually be able to accomplish quite a few very specific things that you asked for and claimed were impossible.
Past that, maybe you’re right, and most people other than me do understand what you’re saying without difficulty, and I’m wrong for thinking you could have stated the problem more clearly. And maybe what you’re asking for actually is something really easy to accomplish. At this point I’m clearly unable to find an adequate solution, whether that’s because I’m misunderstanding what you’re asking for, or for some other reason. It seems like I may not be alone, simply given that other people in the thread seem to be having similar difficulties, but I’d really be thrilled if someone comes along and implements a new shading model that solves all of your problems.
From what I understand, you want tinted glass rendered in a single pass, without resorting to SceneTexture. To render tinted glass, you need to modulate the pixels behind the glass by an RGB value and add the reflected light/environment from the glass surface. There is no GPU blending mode that can do color modulation and color addition at the same time, therefore your request is impossible at the hardware level(*): you need to do it in two blend passes (first modulate, then additive).
(*) There are ways to do it on certain hardware: raytracing (RTX only) and tile-based mobile GPUs actually allow in-shader blending. There’s also an optional feature on some shader model 5.1 GPUs that allows in-shader blending, at a cost.
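That two-blend-pass sequence can be emulated in a few lines of plain Python (the RGB triples and the pass split are illustrative):

```python
def two_pass_tinted_glass(framebuffer, tint, spec):
    # pass 1: modulate blend - multiply the framebuffer by the glass tint
    after_modulate = [fb * t for fb, t in zip(framebuffer, tint)]
    # pass 2: additive blend - add the reflected light/environment on top
    return [m + s for m, s in zip(after_modulate, spec)]
```

The result is the same as a single-pass dual-source blend (`dst * tint + spec`); the hardware limitation only forces it to be split across two passes.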
So? I don’t really care how many blend passes it takes. Unreal already does one draw call per material slot on a mesh; that’s like saying it’s impossible to have meshes with multiple materials on the GPU because you can’t do multiple materials in a single draw call. It doesn’t need to be so efficient that it happens at the GPU blending level under the hood; that’s not the priority here. The priority is an acceptable workflow from a user experience standpoint.