Texture and material optimization: memory vs. CPU

Hey folks

I’m trying to optimize the textures/materials in my project. I’m currently adjusting hue and brightness outside Unreal Engine, so if a variant is brighter or has a different hue, it means a different texture. With that technique the frame rate is stable, but every individual texture has to be stored in memory, when I could just adjust a few numbers instead…

Now I tried optimizing things by using a single texture and adjusting the hue and brightness in the material instance instead; this way I would save on memory. I added a ‘‘Blend overlay’’ node with a Vector4 parameter so I could adjust the hue of each material instance, and did the same for the brightness. After setting that up, I packaged the project and my frame rate dropped to hell on my lower-end test computer.
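For reference, an Overlay blend like the ‘‘Blend overlay’’ node computes a per-channel branch on the GPU. A minimal CPU-side sketch of the standard Overlay formula (function name is mine, not an engine API):

```cpp
#include <cassert>

// Overlay blend, per channel: multiply in the darks, screen in the brights.
// base  = texture sample channel in [0, 1]
// blend = tint parameter channel in [0, 1]
double OverlayBlend(double base, double blend) {
    return base < 0.5
        ? 2.0 * base * blend                         // multiply branch
        : 1.0 - 2.0 * (1.0 - base) * (1.0 - blend);  // screen branch
}
```

With blend = 0.5 both branches reduce to the input, which is why a mid-grey parameter leaves the texture unchanged.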

Now here’s my question: is there a way to adjust the hue and brightness of a material instance without affecting CPU performance? Granted, the test machine doesn’t have a dedicated GPU, and my prior attempt didn’t work (see screenshot).

(Failed attempt at optimizing)

[T64_PPlateYellow.png](filedata/fetch?id=1856204&d=1611598080)

The vector is supposed to be a vector parameter, sorry about that; I just remade the material to screenshot it.

When are you setting the parameter? On tick?

You mean if I call it from a blueprint? I’m not really sure how else I could set a parameter on tick; it’s not a dynamic material, if that’s your question.

I set the parameter in the editor prior to packaging.

Hm, desaturating the texture to 0 so it’s grey and then multiplying by a vector parameter lets me adjust the color without the ‘‘Blend overlay’’ node, yet now I’m really skeptical about the CPU cost of doing anything in the material editor.
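That desaturate-then-multiply graph boils down to one dot product and one multiply per pixel. A rough sketch of the math, assuming the common 0.3/0.59/0.11 luminance weights (names and weights are my assumptions, not pulled from the engine):

```cpp
#include <cassert>
#include <cmath>

struct Color { double r, g, b; };

// Collapse a color to its luminance (full desaturation), then tint it
// by multiplying with a parameter color. Per pixel this is one dot
// product and one multiply: cheaper than an Overlay blend's branch.
Color DesaturateAndTint(const Color& in, const Color& tint) {
    const double luma = 0.3 * in.r + 0.59 * in.g + 0.11 * in.b;
    return { luma * tint.r, luma * tint.g, luma * tint.b };
}
```

Brightness falls out for free here: scale the tint vector above or below 1 instead of adding a separate parameter.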

I’d really like to know what’s up with the tick on the material, because I can’t find anything about it. Otherwise, if it requires constant processing power just to change a hue through a parameter set before packaging, I’m going to forget about it and keep baking my textures outside Unreal. *The memory saved is not substantial enough to justify hindering the CPU.

Shaders have no CPU cost; they run on the GPU, and integrated graphics are not an exception. Unless you have neither an integrated nor a dedicated GPU, in which case your PC will fail to boot or display anything, rendering it impossible to use (pun intended). The only exception is that an iGPU steals some CPU RAM, which is not your issue, as no other textures were added.

Otherwise, there is a bug on your end, as two or so extra shader instructions is so little that it’s completely impossible to measure on any system that can actually launch UE4 games.

The computing device I used is a Samsung Galaxy Core LTE.

Maybe an issue with Unreal 4.25.4. Anyway, the ‘‘Blend overlay’’ node isn’t working for me; ultimately the frame rate drops when I use it. Like I said, it’s not that big of a deal: each texture is 2.68 KB in my stats editor, so I’d need about 400 of them to fill 1 MB. Still, when you see a possibility to optimize, explore it.

Is the HueShift node too expensive?

I’m starting to think any node with a material function will be too expensive. I did build the graph with the ‘‘HueShift’’ node, and the stats show around the same number of instructions as ‘‘Blend overlay’’, but I haven’t play-tested it, as it takes some time to make a parallel build and adjust the parameters on all my textures.
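That similar instruction count makes sense: a hue shift has to do something like an RGB-to-HSV round trip per pixel. A CPU sketch of the idea (my own implementation for illustration, not the engine’s HueShift function):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

struct Color { double r, g, b; };

// Rotate hue by `shift` (fraction of a full turn, 0..1) by converting
// RGB -> HSV, offsetting H, and converting back. Noticeably more math
// per pixel than a plain multiply tint.
Color HueShift(const Color& c, double shift) {
    const double mx = std::max({c.r, c.g, c.b});
    const double mn = std::min({c.r, c.g, c.b});
    const double delta = mx - mn;

    double h = 0.0;                              // hue in [0, 1)
    if (delta > 0.0) {
        if (mx == c.r)      h = std::fmod((c.g - c.b) / delta, 6.0);
        else if (mx == c.g) h = (c.b - c.r) / delta + 2.0;
        else                h = (c.r - c.g) / delta + 4.0;
        if (h < 0.0) h += 6.0;
        h /= 6.0;
    }
    h = std::fmod(h + shift + 1.0, 1.0);         // rotate the hue

    const double s = mx > 0.0 ? delta / mx : 0.0;
    const double v = mx;
    // HSV back to RGB
    auto f = [&](double n) {
        const double k = std::fmod(n + h * 6.0, 6.0);
        return v - v * s * std::max(0.0, std::min({k, 4.0 - k, 1.0}));
    };
    return { f(5.0), f(3.0), f(1.0) };
}
```

A shift of 1/3 turns pure red into pure green, and a shift of 0 leaves the color untouched.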

But the graph I showed with the monochrome texture has the lowest instruction count.

With a grayscale texture you can change the color simply by multiplying it by a color. That’s about as cheap as you can possibly get.