Using Material Layering in 4.19 (Experimental Feature)

We’re excited to announce that we’ve added support for Material Layers in Unreal Engine 4.19 as an experimental feature. Below is some basic documentation to get you started with the feature. If you have any feedback on using Material Layering, please feel free to [post in the feedback thread on our forums][1] or leave feedback and post questions here.

Thank you!

To enable this feature, navigate to Editor Preferences > General > Experimental > Materials and set Material Layering Enabled to true.

Material Layering is a new way to combine materials in a stack, which builds out the correct material graph without needing to build the node graph by hand. There are two new asset types that we use to do this:

  • Material Layer
  • Material Layer Blend

Functionally, these behave similarly to Material Functions. These new asset types also enable you to create child instances, which you could not do with Material Functions.

Material Layer assets have a default input node which pipes base Material Attributes in from the Material. Material Layer Blend assets have two default input nodes which enable you to access the Material Attributes from layers above and below.

Once you have created a Material Layer and a Material Layer Blend asset, you can combine them using a Material Attribute Layers node in a Material. With the node selected, you can add layers and set the assets that each layer and blend should use from the node's Details panel.

When editing a Material Instance with a parent Material that contains a Material Attribute Layers node, the Material Instance Editor also contains a Layer Parameters tab. Here, you can change the assets used for any of the existing Layers or Blends that have been set. Use the Add (+) button to add additional Layers to the stack as needed. You can also see the parameters contained in each of the Material Layer and Material Layer Blends and override their values individually by entering a new one.
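The override behavior described above can be pictured as a simple fallback chain: an instance returns its own value when one has been entered, and otherwise defers to its parent. A minimal sketch with invented names (FScalarParams, GetScalar), not engine code:

```cpp
// Illustrative sketch, not engine code (invented names: FScalarParams,
// GetScalar): an instance returns a locally overridden value when one
// exists, otherwise defers to its parent, and finally to the default.
#include <cassert>
#include <map>
#include <string>

struct FScalarParams
{
    const FScalarParams* Parent = nullptr;    // parent Material or instance
    std::map<std::string, float> Overrides;   // values overridden at this level

    float GetScalar(const std::string& Name, float Default) const
    {
        auto It = Overrides.find(Name);
        if (It != Overrides.end())
            return It->second;                 // overridden here
        return Parent ? Parent->GetScalar(Name, Default) : Default;
    }
};
```

The same fallback idea applies per layer: because each layer and blend owns its parameters, overriding a value in one layer's section leaves other copies of that layer untouched.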


Passing Parameters to Layers


At present, there are three main methods, most of which apply to both Layers and Blends:

  1. Create a parameter within the layer; this behaves the same as parameters in existing Materials and Material Functions. Parameters added within the layer graphs will be unique to that layer and editable within the layer’s section. Even if multiple copies of the same layer are added, each will have its own copy of that parameter to control.
  2. Use the Input pin on the Material Attribute Layers stack:
    This takes a set of Material Attributes as input, which is piped into every layer that is added. For example, we could pass a base normal map for the mesh as an input, like so:

Then within our example Material Layer, we get the Input and blend it in:

Each layer can optionally use or ignore the base stack material attributes input. Currently, this method is not accessible within a blend graph.
  3. Use Shared Input Connections (New Feature):
    These function very similarly to Material Parameter Collections, though rather than setting the data globally from Blueprint, they are used for connecting data between Material Graphs. You create the asset in the same way: make the asset and fill in the list of available parameters, their names, and types:


The parameters listed will be available for use within your Material Graphs. There are two parts to using the system. The first is setting the shared inputs, which should generally be done at the top-level Material Graph. Continuing with the example collection, the inputs are hooked up to parameters in this case but could be any material graph nodes:

As the parameters here are connected from the main Material Graph, they will be listed in the Material’s Global Parameters list when editing a Material Instance. In this respect, the parameters will be shared between any other graphs that read from them.

The next step is to get the shared input parameters, which can be done within a Material Graph, Layer Graph, or others. All instances of a Get Shared Input node will read the same shader input data, in this case, our global parameters. The following reads the shared data within a layer graph on the stack:

This enables a system where parameters and graph inputs can be shared between layers. However, when editing an isolated instance, we don’t have access to the Material where the shared input is set. This behaves similarly to Material Function inputs, and a preview value can be provided in the same way:

The preview values will be used based on the parameter type that is set in the collection, so only the relevant data needs to be filled in. The preview value will be used when generating the layer preview, asset thumbnail, and any other case where the layer must be compiled in isolation.
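Conceptually, this fallback is a lookup that prefers the value set in the top-level Material and otherwise uses the preview value supplied on the node. A minimal sketch with invented names (FSharedInputs, GetSharedInput), using a single float channel for brevity:

```cpp
// Illustrative sketch (invented names: FSharedInputs, GetSharedInput),
// using one float channel for brevity: the value set in the top-level
// Material wins; the node's preview value is used only when nothing has
// been set, as when a layer is compiled in isolation for its thumbnail.
#include <cassert>
#include <map>
#include <string>

struct FSharedInputs
{
    // Values wired up via Set Shared Input in the top-level graph.
    std::map<std::string, float> SetValues;

    float GetSharedInput(const std::string& Name, float PreviewValue) const
    {
        auto It = SetValues.find(Name);
        return It != SetValues.end() ? It->second : PreviewValue;
    }
};
```

Because the fallback is keyed by name, every Get Shared Input node that references the same entry resolves to the same value once the top-level Material sets it.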

Of note, the Get Shared Input node in the graph above is listed as a Texture2D input. This type is set if the parameter is found, but the node will show an unknown-type error if the parameter no longer exists. There is no system in place at this time to automatically rename parameters for you. In a future release, this may be implemented, or the Name entry box may be replaced with a drop-down list of available parameters.

Hi Everyone,

Please feel free to share your feedback by adding a new comment to this thread.


Awesome feature guys, thanks for the quick guide! I have just one small question - are the Shared Input Connection assets really global, like the MPC we’ve had so far? Meaning can multiple independent materials read from the same SIC that is being written to by one of them? If yes, what happens if multiple materials try writing into the same SIC?

Thanks again!

Nice feature! If I had a creature, I could see using this method to show it’s frozen or on fire.

Are we able to dynamically add and remove layers? If so, does the shader complexity dynamically change when adding and removing layers or is this purely a convenience over adding our effects at the end of the material’s chain?

Very cool! Fine-tuning materials with parameters is essential in my workflow. I would love to see dynamic options added later. It could be useful to reassign material layers at run time, especially for effects-driven stuff.

Are there any platform restrictions?

What setup is needed to place a Material Attribute Layers node? Our artists are trying to use this new feature, but whenever we try to add that node in a material, the editor crashes. We have turned on the experimental feature.

It looks like it is crashing in the material code when trying to get child nodes (“Layers” and “Blends”)

Hi Damir,

In general, parameters are owned by the layer or blend that includes them, so if you include two “copper” layers in your stack, each will have its own set of parameters appear in an instance. The shared inputs are an initial solution to allow sharing of logic and data between multiple layers.

Shared Input Collection (SIC) assets are intended to be a definition of available inputs that can be shared across multiple layers and blends. In the posted example, the ‘NormalMap’ shared input is set in the material, allowing any layer graph in use to make use of that input by simply selecting the matching SIC and entry. This causes the graph to internally connect those nodes when used in a stack.

The SIC asset is used to verify that connected inputs are valid and to determine the types, allowing connection with the rest of the graph. It’s also intended to provide a reference point for creators working across a set of layers, documenting the additional inputs that will be available.

You could hook a MaterialParameterCollection (MPC) node to a SetSharedInput node in your material to create a global parameter that would be available in all layers, though there wouldn’t be much benefit over simply using the MPC node within the layer itself.

SICs are one of the most likely to change features when material layering leaves experimental status so any feedback is most welcome.


Thanks for the quick answer, but just to verify my initial question - if I have one SIC, and I write to it in Material A, I won’t have access to those values in Material B? The values are “global” within a single top-level material (and all its layers and blends), not across different materials?


Hi Brenden,

Changing the layer stack is supported in-editor only. When a stack is created, the graphs are linked internally and the relevant static permutations are made, which requires a shader recompile. There was discussion of dynamic (branched) layers, but it’s not something we’ve explored for the initial release.


I guess the power here is that it can be overridden by a Material Instance. So if I wanted a creature to have a damage effect, I’d make two Material Instances, one with a fire layer and one with an ice layer, and swap the Material Instance on the creature (rendering only one layer from a shader-complexity perspective).

Seems more convenient, because I’d otherwise be using Material Functions with a static switch. That said, I’m not entirely sure if there’s any new power gained.

Yes, at least with the initial version, the goal is workflow improvement rather than fundamentally changing the features or performance of the resulting material.

The common case where we’ve seen benefit is a library of layers used across characters or objects that have many variants or skins. Particularly when materials are created using the old layered approach, which was used extensively in games like Paragon, changing a layer required new materials or multiple switches, which over time accumulates many unnecessary shaders and minor bugs. With the new system, iterating and adding new material definitions (layer graphs) is a smoother process and minimizes additional graphs. We’ve also seen uptake from users exploring importing from other tools that offer layered approaches, easing the conversion to an Unreal-friendly format.

Switching to this format partway through a project’s development would likely not result in any significant gains, but planning for it from the start of a project should result in less overhead later in the lifecycle than other material-management methods.

Hi Vincent,

Under the hood this builds a material graph so there should be no new restrictions in the final materials you make. In the current preview we’re aware of some performance overheads and hitching that may occur when updating parameters at runtime, but once we hit the non-experimental release we’ll aim to remove any excess overhead where possible.


Figured out that if you have Live Preview enabled, it crashes. So we have Live Preview turned off for now so that they can try to hook things up. Hopefully, with data put in, live preview will work?

Hi William,

Do you have reproduction steps and a log from your crash, by chance? If you can see MaterialAttributeLayers as an option when creating a new node in a material graph, the feature is likely enabled, as there’s only the single switch. Similarly, the Layers tab in the instance editor is also hidden when the feature is disabled.


Hey Chris,

We created a brand-new material, with the feature enabled and Live Preview enabled.

We right-click and add the MaterialAttributeLayers node. It crashes right then, on line 2123 of SMaterialLayersFunctionsTree.cpp.

Thanks - we will take a look at this and get back to you soon!


Hi William,

We’re having trouble reproing this on the release version of 4.19, so I’ve got a couple of follow-up questions.

Can you repro this in a fresh project?

Are you using layers created during a preview build?

Also, if you are able to debug why you are getting a null pointer in GetChildHandle, that would help a ton. That section of code should just be tracing the struct holding the layer data, so it’s expected that it would return a handle even when you’ve just added the node.


I’m sorry for the slow responses. All my UDN mail goes to my junk folder, and nothing I do seems to change that.

I’m not sure what it means to be using layers created during a preview build.

I am able to debug, and I’m getting a nullptr as it fails to find the child property named “Layers”.

Okay - we definitely want to figure out why you are hitting this, but as this is the main feedback thread and you have a specific bug, could you please create a new UDN post and we’ll keep debugging there?