Dynamic Decal Blending + Mesh Integration Tools


  • replied
    Originally posted by Chosker View Post
    we're well aware of the advantages of using dynamic shader branching
    how about going to the thread where we request proper support (i.e. via nodes) instead of derailing this thread?
    Yeah it's getting a little derailed, but we were talking about optimization because Deathrey started talking about sampling when you have a ton of layers. Regardless, this dynamic branching topic is VERY much a part of making something like these CE decals come to life.

    Originally posted by Zeblote View Post
    So the dynamic branch node you can place in the material editor doesn't work? The more you know...
The built-in one doesn't work. It just uses the same flattening approach as the IF node, or the lerp node when its alpha is at zero or one. The custom node method is the only real way to make it work right now. Speaking of the lerp node, I need to make a dynamic version of it that actually branches at zero and one.
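A minimal sketch of what such a branching lerp Custom node body might look like, assuming Custom node inputs named A, B, and Alpha (the names are illustrative, not an existing engine node):

```hlsl
// Hypothetical Custom node body: behaves like lerp(A, B, Alpha),
// but returns early when Alpha is pinned at 0 or 1 so the untaken
// side's work can be skipped by the dynamic branch.
[branch] if (Alpha <= 0.0f)
{
    return A;  // B's subgraph need not be evaluated
}
[branch] if (Alpha >= 1.0f)
{
    return B;  // A's subgraph need not be evaluated
}
return lerp(A, B, Alpha);  // genuine blend only in between
```

Work feeding the untaken input is only saved if the compiler can hoist it inside the branch, so treat this as a sketch of the idea rather than a guaranteed win.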



  • replied
    So the dynamic branch node you can place in the material editor doesn't work? The more you know...



  • replied
    we're well aware of the advantages of using dynamic shader branching
    how about going to the thread where we request proper support (i.e. via nodes) instead of derailing this thread?



  • replied
    Originally posted by cyaoeu View Post
    Shouldn't the single color branch be way faster than 67ms if it's dynamic branching?
1080p, epic settings, dynamic lighting, a bunch of other programs open, and a toaster of a video card... In a blank scene with those settings, I get like 10fps lol.

    EDIT: Tested it out, yep, 10fps with just the character, the dynamic lighting and those settings. Maybe I need to restart the editor because that still doesn't seem right.
[Attachment: lolfps.jpg]
    Last edited by IronicParadox; 07-31-2017, 04:11 PM.



  • replied
    Originally posted by IronicParadox View Post
    When it's on: ~90ms
    When it's off: ~67ms
    So it's definitely working!
    Shouldn't the single color branch be way faster than 67ms if it's dynamic branching?



  • replied
    Originally posted by Chosker View Post
    "optimize your materials". sure.
    Originally posted by Maximum-Dev View Post
    It comes down to everyone not knowing how to optimize their materials, every time.
Update: Yes, you can put in some dynamic branching. I had some time to mess around and quickly made a simple IF-switch custom node. I put it through the wringer and it works great! To benchmark it, I set up an overkill lerpfest between four 8K textures for the complex part and just a plain color for the simple part. In the level BP, I made a simple loop that alternates between the two states every 10 seconds (to let the graph settle).

One caveat I've found is that if you have any sort of animation going on behind the "deactivated" branch, like a panner, it will keep those textures hot and they will still contribute to frame time. Though I only put one of these custom functions in, right before the material attributes. It's a pretty cheap little function, so it probably wouldn't hurt too badly to throw more of them into the mix, e.g. before things like panners if needed.

    Code:
    [branch] if (A >= B)
    {
        return ThroughA;
    }
    else
    {
        return ThroughB;
    }
    When it's on: ~90ms
    When it's off: ~67ms
    So it's definitely working!

[Attachment: works.jpg]



  • replied
Oh wow, I assumed the engine already had dynamic branching implemented, but I tested it pretty thoroughly and it definitely doesn't... So far I've made all of my materials assuming it was in the engine and never bothered to actually benchmark it to check. I mean, it's only been an HLSL feature since what, SM3? It would make sense to code an engine so that when a lerp's alpha is a constant 0/1, or an IF node's inputs make one branch unreachable, it skips the work behind the branch that isn't needed. Shame on me for expecting that of a AAA engine in 2017 lol...

    That being said, does branching in a custom node work at least?

    Code:
    [branch]
    if (some condition is true)
    {
        result = do branch A;
    }
    else
    {
        result = do branch B;
    }
    return result;
    On the plus side, at least all of my materials are ready to go for when it does get implemented. I might go back and replace my IF nodes with a custom node, if it actually supports the branching.


    Originally posted by Chosker View Post
dithering is not my friend. for quick LOD transitions it's good, but for permanent effects up close it's ugly. the only way to mitigate its ugliness is to use TemporalAA, which in UE4 gives you horrible smearing of moving objects (or the entire screen if the camera moves) and blurs the entire scene as well.
    And about the dithering, at a distance, it's not really a big deal. If dynamic branching really worked, it would save a TON of performance to dither blend two distant materials together, rather than do an actual lerp between them.

    Left: regular blending, Right: dithered blending
[Attachment: LeftRegularRightDither.jpg]

    Hybrid blend where it will do a normal blend <600 units and a dithered blend beyond that:
[Attachment: hybrid600units.jpg]
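The hybrid setup described above could be sketched in a Custom node roughly like this. All names (Depth, Dither, MatA, MatB, Mask) are assumed inputs wired from a PixelDepth node, a dither pattern, and the two material subgraphs; the 600-unit threshold illustrates the idea rather than reproducing the exact material from the screenshot:

```hlsl
// Hypothetical hybrid blend: true lerp up close, binary dithered
// pick beyond 600 units. Dither is a screen-space pattern in [0,1).
[branch] if (Depth < 600.0f)
{
    return lerp(MatA, MatB, Mask);   // smooth blend where it's visible
}
// Far away: each pixel commits to one material, no true blend needed
return (Mask > Dither) ? MatB : MatA;
```

As discussed in this thread, the saving only materializes if the untaken material's work actually ends up inside a dynamic branch after compilation; otherwise both sides still get evaluated.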



  • replied
    Originally posted by Chosker View Post
dithering is not my friend. for quick LOD transitions it's good, but for permanent effects up close it's ugly. the only way to mitigate its ugliness is to use TemporalAA, which in UE4 gives you horrible smearing of moving objects (or the entire screen if the camera moves) and blurs the entire scene as well.


    no. there isn't an awesome node that allows you to optimize materials on the fly based on an evaluated condition.
    you obviously lack this very simple shader knowledge so I'll make it clear for you: Lerp samples both inputs first and then blends between them. all the time. for all pixels. always. lerp doesn't do dynamic branching so when you "compare values and determine what to do" and "fade things over a distance", both versions that go into the lerp are contributing to the shader complexity, always.
    fading things over a distance is used for cosmetic purposes (fading normalmaps into a flat normal, fading textures to a less tiled version to avoid repetition), but whether you know it or not, it causes extra overhead which I'm sure you'll agree means it's the opposite of optimization. pretending it optimizes your material is simply lying to yourself out of ignorance.

    "optimize your materials". sure.
    It comes down to everyone not knowing how to optimize their materials, every time.
    Last edited by Maximum-Dev; 07-30-2017, 05:13 PM.



  • replied
    Originally posted by IronicParadox View Post
    Dithering is your friend. If there is a dithered mask,
dithering is not my friend. for quick LOD transitions it's good, but for permanent effects up close it's ugly. the only way to mitigate its ugliness is to use TemporalAA, which in UE4 gives you horrible smearing of moving objects (or the entire screen if the camera moves) and blurs the entire scene as well.

    Originally posted by IronicParadox View Post
    There is an awesome node called pixel depth. You can compare values and determine what you want to do from there. It's commonly used to fade things out like normal maps to a flat normal at a distance. This is one big way to help optimize materials.
    https://docs.unrealengine.com/latest...ference/Depth/
    no. there isn't an awesome node that allows you to optimize materials on the fly based on an evaluated condition.
    you obviously lack this very simple shader knowledge so I'll make it clear for you: Lerp samples both inputs first and then blends between them. all the time. for all pixels. always. lerp doesn't do dynamic branching so when you "compare values and determine what to do" and "fade things over a distance", both versions that go into the lerp are contributing to the shader complexity, always.
    fading things over a distance is used for cosmetic purposes (fading normalmaps into a flat normal, fading textures to a less tiled version to avoid repetition), but whether you know it or not, it causes extra overhead which I'm sure you'll agree means it's the opposite of optimization. pretending it optimizes your material is simply lying to yourself out of ignorance.

    "optimize your materials". sure.
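The distinction being argued here can be written out explicitly. In plain HLSL (helper and variable names like SampleManyLayers, FlatColor, UV, and Alpha are hypothetical), a lerp always pays for both sides, while an explicit [branch] can skip the untaken one:

```hlsl
// Lerp: both inputs are computed for every pixel, then blended.
float3 expensive = SampleManyLayers(UV);   // always evaluated
float3 cheapSide = FlatColor;              // always evaluated
float3 blended   = lerp(expensive, cheapSide, Alpha);

// Dynamic branch: the untaken side can be skipped at runtime.
float3 picked;
[branch] if (Alpha >= 1.0f)
{
    picked = FlatColor;                    // heavy path skipped
}
else
{
    picked = SampleManyLayers(UV);
}
```

This is a sketch of the cost model, not engine output; what the compiled material actually does depends on what the shader compiler can move inside the branch.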



  • replied
    Originally posted by Chosker View Post
    your skirt analogy doesn't hold up at all. we're trying to create a skirt on top of the body that doesn't also need to make all the fancy features of the body behind it.
Alright, tablecloth ghost costume then... Unless you're using transparency on the cut-up tablecloth, it shouldn't have to sample the pixels of the "rock" underneath it. Dithering is your friend. If there is a dithered mask over top of the rock, then it should only have to pull information for the rock material on the pixels that the cut-up tablecloth ISN'T covering. Now if there is transparency going on, then yeah, it would have to sample the rock and do a blend. That's why you'd definitely want to avoid transparency like the plague on an effect such as this.

    how do you put a range on something "within the material" if dynamic shader branching isn't even there?
    There is an awesome node called pixel depth. You can compare values and determine what you want to do from there. It's commonly used to fade things out like normal maps to a flat normal at a distance. This is one big way to help optimize materials.
    https://docs.unrealengine.com/latest...ference/Depth/
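For reference, the normal-map distance fade described above, written out as HLSL (Depth is assumed wired from the PixelDepth node; FadeStart, FadeEnd, and SampledNormal are hypothetical parameters). Note that as a straight lerp it changes the look at distance but, per the branching discussion in this thread, still evaluates both inputs for every pixel:

```hlsl
// Distance fade: blend the sampled normal toward a flat normal.
float  fade       = saturate((Depth - FadeStart) / (FadeEnd - FadeStart));
float3 flatNormal = float3(0.0f, 0.0f, 1.0f);  // "flat" tangent-space normal
float3 outNormal  = lerp(SampledNormal, flatNormal, fade);
```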

    also the point of using deferred decals is to greatly reduce the cost of such icing on the cake. they are deferred, they have access to all the buffers that are already written in that frame. you know what that means right? you sample what's already there instead of having to duplicate the same functionality in a different material
Oh, I'm aware of how they work and the beauty of them (still waiting on deferred spline decals). Trying to add a feature like this CE decal system requires a lot of work though. UE4 has a pretty simple version of deferred decals and doesn't allow WPO/displacement because of the stage at which they are rendered. After digging around, I found a quote from Andrew Hurley, on an AnswerHub post asking about WPO/displacement on decals, saying:

    "World Position Offset and Displacement both rely on manipulating the vertices of the mesh utilizing that specific material. Despite how it may work in other engines, Unreal handles decals differently by writing to the Gbuffer and storing those properties to be manipulated by the decals themselves. It does not manipulate the underlying vertices of the mesh it is applied. This would more than likely require a refactor of how we handle decals in the engine, which might also make them more costly to use."



  • replied
    your skirt analogy doesn't hold up at all. we're trying to create a skirt on top of the body that doesn't also need to make all the fancy features of the body behind it.

    how do you put a range on something "within the material" if dynamic shader branching isn't even there?

    also the point of using deferred decals is to greatly reduce the cost of such icing on the cake. they are deferred, they have access to all the buffers that are already written in that frame. you know what that means right? you sample what's already there instead of having to duplicate the same functionality in a different material



  • replied
    Originally posted by Chosker View Post
    an extra model would just take over, so at any given pixel you either render one material or the other. a more expensive pixel shader means you're rendering all pixels more expensively.
I didn't mean literally... I meant figuratively. You're trying to create something similar to a skirt model out of a shader, in order to bypass having to model and place the skirt.

    start doing more complex things like blending the same texture on your mesh as from the landscape below (which btw might be blending several layers at the same point) and adding more fancy effects and things get out of hand quickly.
    there's a reason the engine provides a material complexity view. you mock me when I mention shader complexity but then you yourself suggest to "optimize your materials" ?
You're asking for a next-generation "icing on the cake" type feature and you're not willing to pay the cost for it. Make up your mind. As for optimizing materials, yes, you should always optimize them, but you need to make sure you do it properly. You don't need mesh-terrain blending on models that are 3000 units away, so you'd put a range on it within the material.

    sampling the distance field is not prohibitively expensive, it's just a really big chunk of a feature (that stacks on top of others) that's really hard to justify when you get the point where you need to sacrifice features for the sake of performance.
    just because it can already be done in a certain way doesn't mean it's the only acceptable way to do it. what's being suggested here is to do things in a different (more powerful and efficient) way, taking advantage of the deferred renderer. unless you like to settle for less, there's nothing wrong with suggesting such things. especially if it's something that can be done like... *Gasp* other engines do!
Like I've said many times, different engines are designed differently. Picture an inverted pyramid, then picture having to make all sorts of changes down at the bottom point. That's what would need to happen to implement a lot of these features (better landscape layering, "3D" decals, etc.) in a more performant way. It would likely mean rewriting half of the rendering pipeline, and that's probably not going to happen any time soon.

    Originally posted by Maximum-Dev View Post
    Back on topic, this feature is currently not supported in UE4 and there's nothing you can do about it.
    We have been on topic but you're just cherry picking what you want to see.

It requires a lot of hard work that has to be done by the engine development team. After Frostbite 3, Cryengine is the second engine to have this feature built in. But if I've learned one thing from comparing other engines with UE4, it's that you shouldn't expect to get the cool stuff at the same low cost in UE4 as in the others. We're still struggling with performance issues in almost anything related to landscapes (tessellation, per-layer performance, grass tool, etc.); it'd be very unrealistic to ask for new landscape-specific features while the basics are almost broken and not production-ready.
Absolutely. The FB3 engine, if it were licensed, would probably cost studios ~5 million at the least. It's a super AAA engine that's pretty much only used on games raking in tens, if not hundreds, of millions. Cryengine, on the other hand, has always been a leader and "trendsetter" in graphical fidelity. A platform only has so many CPU/GPU FLOPS to work with though, so where an engine is strong in some areas it will be weak in others, and it definitely shows. I'd rather have an engine that is a jack of all trades, master of none, than one that is strong in several areas and weak in many. Which is exactly why I've chosen UE4 over other engines.
    Last edited by IronicParadox; 07-29-2017, 08:45 AM.



  • replied
    Originally posted by Frenetic Pony View Post
    It is, a bunch of people have been fooling around with it for over a year now after DICE showed off the same thing for Star Wars Battlefront. So far nothing's come of it, and all attempts have ballooned into far too much of a performance hit even if the visual results are nice.
What people are doing isn't quite what DICE did.
    People are using distance fields to blend between two materials (the second material on the mesh being the same as the one on the landscape). What DICE did is have their meshes blend into every/any layer of the landscape they're placed on: the landscape layer underneath the mesh is sampled and bleeds into the mesh, which is not something anyone has been able to do in UE4.



  • replied
    Originally posted by IronicParadox View Post
    It just looks like a glorified and tooled version of triplanar mapping. Do you have any performance stats on it on/off? I'm pretty sure it wouldn't be too hard to make a mockup in UE4 and compare the two.
    It is, a bunch of people have been fooling around with it for over a year now after DICE showed off the same thing for Star Wars Battlefront. So far nothing's come of it, and all attempts have ballooned into far too much of a performance hit even if the visual results are nice.



  • replied
@Deathrey @Chosker, Just don't read those posts and don't reply. You already know what happens in the end.


Back on topic, this feature is currently not supported in UE4 and there's nothing you can do about it. It requires a lot of hard work that has to be done by the engine development team. After Frostbite 3, Cryengine is the second engine to have this feature built in. But if I've learned one thing from comparing other engines with UE4, it's that you shouldn't expect to get the cool stuff at the same low cost in UE4 as in the others. We're still struggling with performance issues in almost anything related to landscapes (tessellation, per-layer performance, grass tool, etc.); it'd be very unrealistic to ask for new landscape-specific features while the basics are almost broken and not production-ready.
    Last edited by Maximum-Dev; 07-28-2017, 07:58 AM.

