I have been experimenting with the Paragon assets, which have some of the most complicated materials I have ever seen. I am considering using modified versions of some of the characters for a project. The aesthetic would be heavily simplified, so instead of trying to cut away functionality from the existing materials, I decided to start from scratch and create a master material for all the Paragon characters I want to use.
Unfortunately, this has proved rather difficult, because Paragon characters mostly don't use conventional texture maps. Instead, they have masks applied to different parts of the models, which are then filled in using various material functions. After a couple of days, I managed to create a system that allows me to use material instances to change which materials go into which masks.
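As far as I can tell, the mask setup boils down to something like this (just a rough CPU-side C++ sketch of the blend math so you can see what I mean; the real work is all material graph nodes, and every name below is a placeholder rather than anything from the actual Paragon assets):

```cpp
// Rough sketch of what I think the Paragon mask layering does.
// Not real engine code; names and structure are made up for illustration.
#include <cstdio>

struct Color { float r, g, b; };

// Same idea as the Lerp node in the material editor.
static Color Lerp(const Color& a, const Color& b, float t)
{
    return { a.r + (b.r - a.r) * t,
             a.g + (b.g - a.g) * t,
             a.b + (b.b - a.b) * t };
}

// Each channel of an RGBA mask texture marks "this pixel belongs to layer N",
// and each layer's look comes from its own material function.
struct MaskSample { float r, g, b, a; };

Color BlendLayers(const Color& base, const Color layers[4], const MaskSample& mask)
{
    Color result = base;
    const float weights[4] = { mask.r, mask.g, mask.b, mask.a };
    for (int i = 0; i < 4; ++i)
        result = Lerp(result, layers[i], weights[i]);
    return result;
}

int main()
{
    Color base      = { 0.5f, 0.5f, 0.5f };
    Color layers[4] = { {0.8f, 0.1f, 0.1f},    // e.g. cloth
                        {0.7f, 0.6f, 0.2f},    // e.g. gold trim
                        {0.2f, 0.2f, 0.25f},   // e.g. metal
                        {0.1f, 0.1f, 0.1f} };  // e.g. leather
    MaskSample mask = { 0.0f, 1.0f, 0.0f, 0.0f }; // this pixel is gold trim
    Color out = BlendLayers(base, layers, mask);
    std::printf("%.2f %.2f %.2f\n", out.r, out.g, out.b);
}
```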
The system works like this: I have a single base material function, which I copy something like a dozen or more times within the master material, feeding different textures and values into each copy to control its appearance. I then feed all of these variations into a system of dozens of nested material functions, the top level of which has seven outputs, one for each material I want to blend. Each output is controlled by a float parameter that can be adjusted in the material instance. This lets me use a single master material for every Paragon character, with a much simplified palette of materials.
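Written out as code, the per-slot selection is basically this (again just an illustrative C++ sketch of what I think my node chain amounts to, with made-up names; in the graph it's a chain of Lerp nodes driven by the scalar parameters):

```cpp
// Illustrative sketch of the per-slot selection: every variation of the base
// material function gets evaluated, and a scalar parameter from the material
// instance decides how much it contributes to that mask slot.
// Not real Unreal code; the names are placeholders.
#include <cstdio>
#include <vector>

struct LayerResult { float r, g, b; };

// Stand-in for one copy of the base material function with its own textures/values.
LayerResult EvaluateVariation(int index)
{
    // In the real material, each variation samples its own textures and does
    // its own math here.
    float v = 0.1f * static_cast<float>(index);
    return { v, v, v };
}

// One scalar parameter per output, set in the material instance, decides how
// strongly each variation contributes to this mask slot.
LayerResult SelectForSlot(const std::vector<float>& slotWeights)
{
    LayerResult result = { 0.f, 0.f, 0.f };
    for (size_t i = 0; i < slotWeights.size(); ++i)
    {
        LayerResult variation = EvaluateVariation(static_cast<int>(i));
        result.r += variation.r * slotWeights[i];
        result.g += variation.g * slotWeights[i];
        result.b += variation.b * slotWeights[i];
    }
    return result;
}

int main()
{
    // Weights as they might be set in a material instance: only variation 3
    // is "on" for this slot, but every variation still gets evaluated.
    std::vector<float> weights = { 0.f, 0.f, 0.f, 1.f, 0.f };
    LayerResult out = SelectForSlot(weights);
    std::printf("%.2f %.2f %.2f\n", out.r, out.g, out.b);
}
```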
However, I am suddenly concerned that in my effort to simplify the process, my solution may actually be less performant than creating separate materials for each character, each including only the material functions it absolutely needs. The trouble is that I don't know enough about material performance to tell whether my method is any better than using dozens of separate materials.
I think my main worry is that it'll be doing a bunch of unneeded computations, loading textures that might not even appear on the model, and so on.