Any idea how to know how demanding a material is?

I know there is a statistics list when you create a material, but still: is there a more precise way to know how demanding a material is for the game?

thanks :slight_smile:

Yes: measure its cost to render by profiling it in the context in which it is used. There are many tools for profiling rendering performance, and they vary by platform.

There is also the shader complexity viewmode, which is a useful way to identify abnormally expensive materials and/or bad overdraw.

And you can see the instruction count in the Material Editor window :slight_smile:

I think he’s interested in knowing the actual cost of a material on his hardware. Shader complexity colors give you only a rough idea of material cost, nothing precise. And as for profiling the accurate cost of select materials in an environment filled with tons of stuff, I don’t think profiling would help there either. I’m interested in seeing accurate material costs on hardware as well.

Like with anything on the GPU, you’ll have to profile multiple times to get an average value. Material complexity will impact the Base Pass, but that depends on the number of pixels covered on screen by the material and/or the density of the mesh (vertex instructions). RenderDoc will expand on the Base Pass information and show you the timings in microseconds, but again you’ll have to capture multiple frames and compare.
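Since GPU timings jitter from frame to frame, the usual approach is to take several captures and compare the mean and the spread. A minimal sketch of that bookkeeping (the microsecond values are made up; in practice you’d read them off your RenderDoc captures):

```python
# Average Base Pass timings collected from several captures.
# The values below are invented purely for illustration.
base_pass_us = [412.5, 398.1, 420.7, 405.3, 409.9]  # one reading per capture

mean_us = sum(base_pass_us) / len(base_pass_us)
spread_us = max(base_pass_us) - min(base_pass_us)

print(f"mean Base Pass time: {mean_us:.1f} us (spread {spread_us:.1f} us)")
```

If the spread is large relative to the mean, the scene or the GPU clocks weren’t stable and the average isn’t trustworthy yet.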

You can’t tell in advance how much a material will cost on hardware, because it depends on so many things: screen size, lights, post-processing, etc.
So the only way is to profile with third-party software like RenderDoc or Intel GPA.

Materials require calculations on two separate paths: vertex and pixel. Vertex operations include things like rendering polygons and all the operations performed on vertices. These types of operations scale with the vertex count of the object, so a shader with a lot of heavy vertex instructions on a flat plane wouldn’t cost as much as a simple shader on an object with thousands of vertices. This is why limiting the number of vertices in your scene (via LODs) is so important. And since there are usually far more pixels than vertices, moving as many operations as possible from the pixel to the vertex shader is generally good practice.
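Here’s a back-of-the-envelope illustration of that last point, as a toy cost model (all numbers invented; a real GPU schedules work very differently):

```python
# Toy cost model: total shading work is roughly
#   (instructions per vertex * vertex count) + (instructions per pixel * pixels covered).
# All numbers below are made up for illustration.
def shading_work(vertex_instr, pixel_instr, vertices, pixels):
    return vertex_instr * vertices + pixel_instr * pixels

vertices = 10_000    # a moderately dense mesh
pixels = 1_000_000   # pixels the mesh covers on screen

# The same extra operation, done per pixel vs. done per vertex and interpolated:
per_pixel = shading_work(10, 51, vertices, pixels)   # 51,100,000 units
per_vertex = shading_work(11, 50, vertices, pixels)  # 50,110,000 units

print(per_pixel - per_vertex)  # the per-pixel version costs 990,000 more units
```

With 100x more pixels than vertices, one instruction moved to the vertex shader is roughly 100x cheaper, which is why vertex interpolators are such an easy win.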

Pixel operations are typically what causes slowdowns from shaders, though, because they must be calculated and rendered for every pixel the material covers. Stuff like Parallax Occlusion Mapping, translucency, Landscape layer blends, and other complex materials must perform all these operations, checks, and passes per pixel. It’s a good idea to use the material quality switches when things get too complex; for instance, using basic parallax mapping in lieu of Parallax Occlusion Mapping on mobile devices, and removing more advanced shading on landscapes.
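To give a feel for why POM in particular gets expensive, here’s a toy count of heightmap texture reads per pixel (the step count is an invented example; real shaders vary widely):

```python
# Toy per-pixel texture-read counts (illustrative numbers only).
# Basic (bump offset) parallax samples the heightmap once per pixel;
# Parallax Occlusion Mapping ray-marches it in many steps per pixel.
PIXELS_COVERED = 1_000_000

def basic_parallax_reads(pixels):
    return pixels * 1        # one heightmap read per pixel

def pom_reads(pixels, steps=16):
    return pixels * steps    # one read per ray-march step, per pixel

ratio = pom_reads(PIXELS_COVERED) // basic_parallax_reads(PIXELS_COVERED)
print(f"POM does {ratio}x the heightmap reads of basic parallax here")
```

And that multiplier applies to every pixel the material covers, which is exactly why the quality switches are worth wiring up.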

Things are not always so simple, though. The way Unreal’s deferred rendering works, a “screenshot” of the object normals and scene depth is always rendered and cannot be switched off. So you have a tradeoff: complex materials can be rendered much more cheaply, with fewer shader instructions, at the expense of a constant overhead. Also, shaders will cache certain operations and results that don’t change over time, so the actual performance on hardware is usually better than the shader complexity view might have you believe. But other things, like translucency overdraw, are always an issue.
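That fixed-overhead tradeoff can be sketched with another toy model (all numbers invented): deferred pays a constant G-buffer cost every frame, but lighting then reads cheap G-buffer data instead of re-evaluating the full material per light.

```python
# Toy comparison of deferred vs. forward shading cost (numbers invented).
PIXELS = 1_000_000

def deferred_cost(num_lights, gbuffer_cost=8, light_cost=4):
    # Fixed G-buffer write, then cheap per-light shading from the G-buffer.
    return PIXELS * gbuffer_cost + PIXELS * light_cost * num_lights

def forward_cost(num_lights, material_cost=10):
    # No fixed overhead, but the full material is evaluated per light.
    return PIXELS * material_cost * num_lights

for lights in (1, 2, 4):
    print(lights, "deferred wins:", deferred_cost(lights) < forward_cost(lights))
```

With one light the constant overhead loses; as lights pile up, deferred pulls ahead, which is the tradeoff described above in miniature.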

I would say unless you’re doing really crazy things with materials (like POM and translucency), don’t worry too much about it. It’s really hard to know what the actual cost of a material is in the scene, so testing is the only way to know for sure. You can always replace a shader with a basic color to see how much impact the increased complexity has. But nowadays, textures, per-pixel normals, fresnel and basic math operations, G-buffer calls, and basic blends all come at an insignificant cost. And even if the cost is too high, you can always drop the resolution/screen percentage a notch.
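On that last point, dropping screen percentage helps more than it might look, because the number of shaded pixels scales with the square of the resolution scale. A quick sanity check (toy resolution math, not how the engine rounds internally):

```python
# Shaded pixel count scales with the square of the screen percentage,
# so per-pixel shading cost drops quadratically as the scale is lowered.
def shaded_pixels(width, height, screen_percentage):
    scale = screen_percentage / 100
    return round(width * scale) * round(height * scale)

full = shaded_pixels(1920, 1080, 100)    # 2,073,600 pixels
reduced = shaded_pixels(1920, 1080, 80)  # 1,327,104 pixels

print(f"80% screen percentage shades {reduced / full:.0%} of the pixels")
```

So a seemingly small 20% drop in screen percentage removes roughly a third of the per-pixel shading work.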