    UE4 still does not have any proper way to do tinted glass

    Hi,

    So for ages, UE4 has not had any proper way of making tinted glass. There is an ugly workaround of putting a SceneColor node into the emissive slot, but this has the issue of ignoring all translucent materials behind it, so it's usable only in some cases. On top of that, SceneColor is ignored when ray tracing is enabled, which makes it impossible to create tinted glass with ray tracing.
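
    For reference, here is roughly what that workaround looks like if you build it with the editor's Python scripting plugin. This is just an illustrative sketch: the asset path, material name, node positions and tint value are placeholders, not anything official.

    Code:
    import unreal

    # Sketch of the workaround described above: a lit translucent material whose
    # emissive output is SceneColor multiplied by a tint color.
    # (Opacity and refraction handling omitted for brevity.)
    asset_tools = unreal.AssetToolsHelpers.get_asset_tools()
    mat = asset_tools.create_asset("M_TintedGlass", "/Game/Materials",
                                   unreal.Material, unreal.MaterialFactoryNew())
    mat.set_editor_property("blend_mode", unreal.BlendMode.BLEND_TRANSLUCENT)
    mat.set_editor_property("translucency_lighting_mode",
                            unreal.TranslucencyLightingMode.TLM_SURFACE)

    lib = unreal.MaterialEditingLibrary
    scene_color = lib.create_material_expression(mat, unreal.MaterialExpressionSceneColor, -600, 0)
    tint = lib.create_material_expression(mat, unreal.MaterialExpressionConstant3Vector, -600, 200)
    tint.set_editor_property("constant", unreal.LinearColor(0.1, 0.6, 0.2, 1.0))  # placeholder green tint
    multiply = lib.create_material_expression(mat, unreal.MaterialExpressionMultiply, -300, 100)

    lib.connect_material_expressions(scene_color, "", multiply, "A")
    lib.connect_material_expressions(tint, "", multiply, "B")
    lib.connect_material_property(multiply, "", unreal.MaterialProperty.MP_EMISSIVE_COLOR)
    lib.recompile_material(mat)

    That gets you a tint in raster mode, but the two problems above still apply: SceneColor does not contain other translucent surfaces, and it is ignored entirely when ray tracing is on.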

    Am I missing something, or:
    1. Is there really no way to make tinted glass which doesn't ignore other translucent objects behind it?
    2. Is there really no way to make tinted glass which works with ray tracing?

    What we really need is a Modulate blend mode which supports lit Surface TranslucencyVolume lighting.

    Thanks in advance.
    Last edited by Rawalanche; 10-09-2019, 04:01 AM.
    https://www.artstation.com/artist/rawalanche

    #2
    Not sure about ray tracing, but tinted glass is one of the few things that really look best in unreal.
    Check out this stuff.
    https://www.unrealengine.com/en-US/t...-unreal-engine

    Comment


      #3
      Originally posted by MostHost LA View Post
      Not sure about ray tracing, but tinted glass is one of the few things that really look best in unreal.
      Check out this stuff.
      https://www.unrealengine.com/en-US/t...-unreal-engine
      I've seen that; they are using a ridiculous workaround of having two separate glass meshes, one for the non-tinted translucent material and one behind it for the tint based on a Modulate shader. That will work in the context of something like a one-trick-pony VFX shot, but it's an unacceptable solution for any game work. You can't double the amount of your meshes and have them so close that they will most likely cause Z-fighting at distance just to pull off a tinted glass shader.

      You also can't have artists creating an inner modulate-material mesh for every mesh that could possibly have some transparent glass. It's just not realistic. If anything, it shows what a painful route The Mill artists had to go through just to finish the project.

      They in fact had to use four (!) faces in the end for each glass surface, because double-sided translucent materials in UE4 aren't even capable of proper depth sorting.
      Last edited by Rawalanche; 10-09-2019, 06:09 AM.
      https://www.artstation.com/artist/rawalanche

      Comment


        #4
        It's not a work-around, it's how it works, and it absolutely works in game without any z-fighting.
        It also makes 100% sense to have different layers. It's exactly the same as you would do for reflectors.

        Comment


          #5
          Originally posted by MostHost LA View Post
          It's not a work-around, it's how it works, and it absolutely works in game without any z-fighting.
          It also makes 100% sense to have different layers. It's exactly the same as you would do for reflectors.
          Not at all... It just doesn't make sense. I mean, really? Having an inner shell for every possible small glass part of a vehicle that's supposed to have tinted glass? Workarounds are not a solution. The solution is simple: a shading model which can do tinted glass. There's absolutely nothing preventing that in realtime graphics.

          It's not just about having a solution, it's about having a good solution. Everyone can come up with some sort of solution sooner or later, but not every solution should be implemented if it means complicating the process.

          Sure, if you have 500 classes in your project, you could have a blueprint with 500 if statements to check whether it's the class you are looking for, or you could do a single cast. Both are solutions to the problem, but one is acceptable while the other is not.
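
          Just to illustrate that analogy with a trivial, completely made-up example (not actual UE4 code, just the shape of the problem):

          Code:
          class Door: ...
          class Window: ...
          class TailLight: ...
          # ...imagine roughly 500 of these classes in the project

          def find_windows_with_ifs(actors):
              # One branch per class: it technically works, but it doesn't scale.
              windows = []
              for a in actors:
                  if type(a).__name__ == "Door":
                      pass
                  elif type(a).__name__ == "Window":
                      windows.append(a)
                  elif type(a).__name__ == "TailLight":
                      pass
                  # ...hundreds more branches
              return windows

          def find_windows_with_cast(actors):
              # The single-cast equivalent: let the type system answer directly.
              return [a for a in actors if isinstance(a, Window)]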
          Last edited by Rawalanche; 10-10-2019, 03:53 AM.
          https://www.artstation.com/artist/rawalanche

          Comment


            #6
            Originally posted by Rawalanche View Post
            I've seen that; they are using a ridiculous workaround of having two separate glass meshes, one for the non-tinted translucent material and one behind it for the tint based on a Modulate shader. That will work in the context of something like a one-trick-pony VFX shot, but it's an unacceptable solution for any game work.
            So, it would certainly be cool if there was a more streamlined approach, but this seems a bit like an extreme overreaction on several dimensions.

            Originally posted by Rawalanche View Post
            You can't double the amount of your meshes and have them so close that they will most likely cause Z-fighting at distance just to pull off a tinted glass shader.
            There won't be Z fighting, since (as you noted below) translucent materials aren't even sorted via a Z buffer (and also there are plenty of built-in features, like PDO or the camera offset node, that can prevent Z fighting from even being a problem in the first place).

            And drawing geometry multiple times per frame is already an inherent part of the deferred renderer, so this is hardly something that will have a significant effect on performance. I can't see how having a "proper" way of doing this would actually make a measurable difference.

            Originally posted by Rawalanche View Post
            You also can't have artists creating an inner modulate-material mesh for every mesh that could possibly have some transparent glass. It's just not realistic. If anything, it shows what a painful route The Mill artists had to go through just to finish the project.
            What part of this is "unrealistic"? It's just a question of duplicating all faces with a particular material; I think almost all modeling packages can easily be scripted to do this automatically on export.
            Originally posted by Rawalanche View Post
            It's not just about having a solution, it's about having a good solution.
            But is this solution really that bad? I can understand how having a dedicated shading model for this might feel "cleaner," but I'm not convinced that it would be significant in practice.

            More importantly, there are plenty of translucent effects that the engine can't do at all, so I'd be surprised if Epic wanted to prioritize something that can already be accomplished in a way that is, frankly, not that unpleasant. It's a "workaround," for sure, but then again, so is deferred rendering itself.

            Originally posted by Rawalanche View Post
            Sure, if you have 500 classes in your project, you could have a blueprint with 500 if statements to check whether it's the class you are looking for, or you could do a single cast. Both are solutions to the problem, but one is acceptable while the other is not.
            I'm vaguely terrified to know in what situation 500 if statements could possibly do the same thing as a cast.

            Comment


              #7
              Originally posted by amoser View Post
              So, it would certainly be cool if there was a more streamlined approach, but this seems a bit like an extreme overreaction on several dimensions. (...)
              Do I really, seriously need to explain why creating unique, separate complex geometry with two separate materials is a worse solution than adhering to the PBR standard where base color defines transmission color for transmissive materials?

              https://youtu.be/DO7zkJRWNgs

              I do get that not everyone needs to be effective at what they do, but I do. I can't afford to spend exceptional amounts of time to achieve mediocre things.

              Let's say I need to do the following: a car has windows and tail lights. The windows need to be dark green tinted glass and the tail lights need to be red tinted glass.

              Normal, acceptable workflow:
              1. Create window material by creating Lit Translucency material with translucency tinted to green
              2. Create tail light material by creating Lit Translucency material with translucency tinted to red
              3. Apply window material to car window
              4. Apply tail light material to car tail lights

              Current workflow:
              1. Go back to your DCC, assuming you are lucky enough that it's your model and not a bought/received asset
              2. Spend time selecting all the window geometry
              3. Duplicate it
              4. Offset it
              5. Assign it new material ID
              6. Spend time selecting all the tail light geometry
              7. Duplicate it
              8. Offset it
              9. Spend more time dealing with the mesh self-intersections induced by offsetting a mesh of complex curvature along the face normals
              10. Assign it a new material ID
              11. Bring the asset back
              12. Notice you are no longer able to take advantage of automatic LOD generation, as decimation of two closely neighbouring surfaces is imperfect, so the tint layer sometimes clips into the glass layer
              13. Create glass material by creating Lit Translucency material
              14. Create green tint material by creating green modulate material
              15. Create red tint material by creating red modulate material
              16. Assign glass material to glass pieces
              17. Assign red tint material to inner tail light layer
              18. Assign green tint material to inner window layer

              I can't comprehend what's in it for you to defend a clearly inferior and ineffective workflow. Why would you advocate against improving the engine? There is absolutely no benefit in treating one solid glass medium as two separate objects. It has tons of downsides and corner-case issues with literally zero benefits.
              https://www.artstation.com/artist/rawalanche

              Comment


                #8
                what's wrong with doing it this way
                https://youtu.be/XRwFh6s5wqE

                Comment


                  #9
                  Originally posted by NotSoAccurateNo1 View Post
                  what's wrong with doing it this way
                  https://youtu.be/XRwFh6s5wqE
                  Because you do not want to edit opacity. In this context, opacity should be 1. But Unreal ties opacity to accumulated distortion, which renders you unable to have any meaningful shading on objects with transmission.


                  In any case, what the OP is asking for is largely impractical: to get it working, you would need to accumulate tint color from all refractive objects, the same way distortion is accumulated, which is expensive.
                  You've got to settle for using scene color in emissive.
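
                  To illustrate what "accumulate tint color" means, here is a toy numeric sketch (values are made up): each tinted layer in front of the background multiplies the light passing through it, so the renderer would have to track that product per pixel across every tinted surface.

                  Code:
                  from functools import reduce

                  def transmitted_color(background, tints):
                      # Multiply the background color by the tint of each translucent layer in front of it.
                      return reduce(lambda color, tint: tuple(c * t for c, t in zip(color, tint)),
                                    tints, background)

                  # White background seen through a green window and a red tail light (made-up values).
                  print(transmitted_color((1.0, 1.0, 1.0), [(0.2, 0.8, 0.3), (0.9, 0.1, 0.1)]))
                  # -> roughly (0.18, 0.08, 0.03)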

                  Comment


                    #10
                    Originally posted by NotSoAccurateNo1 View Post
                    what's wrong with doing it this way
                    https://youtu.be/XRwFh6s5wqE
                    1. It does not work with ray tracing.
                    2. It ignores any translucent objects behind it.
                    https://www.artstation.com/artist/rawalanche

                    Comment


                      #11
                      Originally posted by Rawalanche View Post

                      Do I really, seriously need to explain why creating unique, separate complex geometry with two separate materials is a worse solution than adhering to the PBR standard where base color defines transmission color for transmissive materials?

                      https://youtu.be/DO7zkJRWNgs

                      I do get that not everyone needs to be effective at what they do, but I do. I can't afford to spend exceptional amounts of time to achieve mediocre things.
                      I don't think you understood my post. I did not say that there aren't better ways it could work. In fact, I said the opposite. I did say that I feel you're exaggerating the degree of inconvenience present in the current way of doing it, and I still believe that. The current workflow is not as bad as you present it.

                      1. Go back to your DCC, assuming you are lucky enough that it's your model and not a bought/received asset
                      2. Spend time selecting all the window geometry
                      3. Duplicate it
                      4. Offset it
                      5. Assign it new material ID
                      6. Spend time selecting all the tail light geometry
                      7. Duplicate it
                      8. Offset it
                      Like I said, if the window and tail lights already have separate materials on them, this can easily be scripted in any modeling package so that it can be done with a single button click. Sure, that requires work, but if it's something you're going to be doing regularly enough that doing it by hand is genuinely unacceptable, it seems like the benefit of making a script is more than worth it.
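
                      As a rough sketch of what I mean, here's what such a script could look like in Blender (the material names and offset distance are placeholders; the same idea applies to any DCC with a scripting API):

                      Code:
                      import bpy
                      import bmesh

                      obj = bpy.context.active_object
                      mesh = obj.data

                      # Placeholder names: the existing glass material and the tint material to add.
                      glass_index = mesh.materials.find("MI_Glass")
                      tint_mat = bpy.data.materials.get("MI_WindowTint") or bpy.data.materials.new("MI_WindowTint")
                      mesh.materials.append(tint_mat)
                      tint_index = len(mesh.materials) - 1

                      bm = bmesh.new()
                      bm.from_mesh(mesh)

                      # Duplicate every face that uses the glass material.
                      glass_faces = [f for f in bm.faces if f.material_index == glass_index]
                      dup = bmesh.ops.duplicate(bm, geom=glass_faces)
                      new_faces = [g for g in dup["geom"] if isinstance(g, bmesh.types.BMFace)]
                      new_verts = [g for g in dup["geom"] if isinstance(g, bmesh.types.BMVert)]

                      # Push the duplicated shell slightly inward and give it the tint material.
                      bm.normal_update()
                      for v in new_verts:
                          v.co -= v.normal * 0.002  # offset in scene units; adjust per asset
                      for f in new_faces:
                          f.material_index = tint_index

                      bm.to_mesh(mesh)
                      bm.free()
                      mesh.update()

                      Hook something like that into the export step and the duplicate/offset/assign-material steps in your list stop being manual work.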

                      9. Spend more time dealing with the mesh self-intersections induced by offsetting a mesh of complex curvature along the face normals
                      10. Assign it a new material ID
                      11. Bring the asset back
                      12. Notice you are no longer able to take advantage of automatic LOD generation, as decimation of two closely neighbouring surfaces is imperfect, so the tint layer sometimes clips into the glass layer
                      Like I also said, if self-intersections between the translucent geometry cause a problem, use the camera offset node on one of the layer materials instead of actually including the offset in the geometry. Confirm that self-intersections are even a problem first, though, since I've never run into that in this situation, and I don't think you will either given how depth sorting works with translucency.

                      13. Create glass material by creating Lit Translucency material
                      14. Create green tint material by creating green modulate material
                      15. Create red tint material by creating red modulate material
                      16. Assign glass material to glass pieces
                      17. Assign red tint material to inner tail light layer
                      18. Assign green tint material to inner window layer
                      Again, this is pretty easy to script if you know this is something you'll need to do more than once.

                      I can't comprehend what's in it for you to defend a clearly inferior and ineffective workflow. Why would you advocate against improving the engine? There is absolutely no benefit in treating one solid glass medium as two separate objects. It has tons of downsides and corner-case issues with literally zero benefits.
                      I believe I addressed this very clearly and specifically in my previous post, but I'll try one more time:

                      I'm not saying this workflow isn't inferior to one in an ideal world in which this is handled without the user expending any effort. I am saying that there are plenty of other things, even specifically related to translucency, that are significantly more challenging, or even impossible, to achieve with the current tools. "Improving the engine," unfortunately, isn't a binary proposition. Improving anything necessitates making a choice about what, specifically, to improve.

                      Personally, I'd rather see attention given to things that can't be done at all than things that can already be done, even if the way to do them is fairly awkward. I'm not even trying to give an opinion or make a value judgement here, though. Like I said, I'd simply be surprised if Epic decided to prioritize this particular issue over any others.

                      Comment


                        #12
                        Originally posted by amoser View Post
                        Like I said, I'd simply be surprised if Epic decided to prioritize this particular issue over any others.
                        I would not. The usefulness of ray tracing currently lies mainly in the visualization market, into which Epic is rapidly trying to expand given the recent developments as well as some items on the 4.24 roadmap, so sooner or later more and more people there will get bitter about the inability to raytrace any kind of colored glass.

                        You also keep mentioning scripts to automate those tasks. Despite the fact that they will still fail in many cases (a fluid simulation of pouring wine, for example), it's still a bad solution. It adds tons of workflow overhead which needs to be constantly managed and manually reviewed to cover corner cases. It's just not possible to employ overcomplicated and fragile solutions in production environments.

                        I mean, look at how they were using The Mill's Human Race car demo to sell all the great aspects of realtime rendering to the public. Everything is suddenly interactive, and realtime, and cool, yet The Mill continues to use offline rendering for the vast majority of their jobs. It's overcomplicated, convoluted solutions like this tinted glass thing that are the reason it just doesn't pay off, despite all the realtime benefits. The overhead of artists' time spent engineering and performing tons of unnecessary workarounds is way more expensive than a bunch of machines just crunching frames in offline renderers.
                        Last edited by Rawalanche; 10-12-2019, 04:31 AM.
                        https://www.artstation.com/artist/rawalanche

                        Comment


                          #13
                          I am also of the opinion that tinted glass is quite complicated to set up right now, and the methods you can use either don't work with emissive objects behind them or don't work with ray tracing. You also need to do some manual work duplicating geometry, as explained in the Human Race paper.
                          There are only workarounds for tinted glass right now, not a real solution.

                          Comment


                            #14
                            Originally posted by Rawalanche View Post
                            I would not. The usefulness of ray tracing currently lies mainly in the visualization market, into which Epic is rapidly trying to expand given the recent developments as well as some items on the 4.24 roadmap, so sooner or later more and more people there will get bitter about the inability to raytrace any kind of colored glass.
                            I'm not sure we're even really disagreeing about most of this, but it remains my opinion that for translucency in general, including the specific case of raytracing glass, there are other issues that are more significant than this one. I think that those issues should, and probably will, be addressed first.

                            Originally posted by Rawalanche View Post
                            You also keep mentioning scripts to automate those tasks. Despite the fact that they will still fail in many cases (a fluid simulation of pouring wine, for example) (...)
                            Why would it fail in those cases? I don't see why the approach of duplicating geometry would not be applicable either to a baked geometry cache or a live simulation.

                            Originally posted by Rawalanche View Post
                            (...) it's still a bad solution. It adds tons of workflow overhead which needs to be constantly managed and manually reviewed to cover corner cases. It's just not possible to employ overcomplicated and fragile solutions in production environments.
                            Keeping in mind the fact that all production environments are not necessarily created equal, I personally do not find the thought of using some custom logic to prepare data for migration from one piece of software to another to be particularly alarming. In fact, in my own experience, it's nearly ubiquitous. Again, note that I'm not saying that it's desirable, just that it's not only possible, but often necessary in practice.

                            Originally posted by Rawalanche View Post
                            I mean, look at how they were using The Mill's Human Race car demo to sell all the great aspects of realtime rendering to the public. Everything is suddenly interactive, and realtime, and cool, yet The Mill continues to use offline rendering for the vast majority of their jobs. It's overcomplicated, convoluted solutions like this tinted glass thing that are the reason it just doesn't pay off, despite all the realtime benefits. The overhead of artists' time spent engineering and performing tons of unnecessary workarounds is way more expensive than a bunch of machines just crunching frames in offline renderers.
                            I'm just not sure I agree that this tinted glass thing is a prime example of why real time renderers haven't displaced offline renderers. I think there are plenty of other more pressing limitations involved, to say nothing of the fact that studios tend to have a lot of custom infrastructure built around specific offline renderers that would need to be mostly thrown out and re-created in order to support using Unreal instead. There are also plenty of non-technical challenges, such as training and familiarity, that make it challenging to completely replace a widely-used offline renderer with a real time renderer that most people aren't yet accustomed to.

                            We're saying lots of words at each other, but I'm not sure we're actually communicating about anything. I'm sure Epic's developers will be able to formulate an opinion on which limitations, bugs, and missing features should be addressed first without either of our input.

                            Comment


                              #15
                              Originally posted by amoser View Post
                              Why would it fail in those cases? I don't see why the approach of duplicating geometry would not be applicable either to a baked geometry cache or a live simulation.
                              The rudimentary script which would be created to generate tint-material geometry for static meshes would hardly work for simulated fluid caches; anyone with any DCC familiarity would know that. It would require some elaborate monstrosity which would also cover the case of an animated mesh with dynamic topology. Those meshes are usually heavy, so they're also hardly something you want to be playing back and keeping in memory twice.

                              I think the main disagreement we have is that you underestimate the importance of having a complete basic PBR shading model. Sure, there are many more issues that need to be tackled, but it's really hard to build on a broken base. In production environments, it's just very difficult to employ any kind of shading model which does not cover even as common a use case as tinted translucency/refraction.

                              Originally posted by amoser View Post
                              I'm just not sure I agree that this tinted glass thing is a prime example of why real time renderers haven't displaced offline renderers. I think there are plenty of other more pressing limitations involved, to say nothing of the fact that studios tend to have a lot of custom infrastructure built around specific offline renderers that would need to be mostly thrown out and re-created in order to support using Unreal instead. There are also plenty of non-technical challenges, such as training and familiarity, that make it challenging to completely replace a widely-used offline renderer with a real time renderer that most people aren't yet accustomed to.
                              I've been working as an offline 3D generalist for about 11 years now, so I'd say I have a good basis for comparison, and quite a few of my colleagues have tinkered with realtime workflows too, but the consensus is pretty much the same: the main issue is simple stuff requiring ridiculous, time-expensive workarounds to achieve. I've eventually managed to bite the bullet and transitioned my career into something you could call a UE4 technical artist, but most of my colleagues just weren't willing to put up with that. Even I myself am still spending way more time in UE to ultimately achieve inferior quality, but hey, it pays better.
                              Last edited by Rawalanche; 10-14-2019, 02:00 AM.
                              https://www.artstation.com/artist/rawalanche

                              Comment
