Per-vertex lighting and normal maps

Hi everyone,

I’m experimenting with a deliberately retro / 5th-generation-inspired visual style.

My goal is to use very simple meshes (low poly) combined with detailed normal maps, but without using standard per-pixel dynamic lighting.

Specifically, I’m interested in using per-vertex lighting (Gouraud-style) as the main lighting model, while still having normal maps influence the final shading in a non-physical way (for example, modulating or perturbing the interpolated vertex lighting, rather than doing full per-pixel lighting).

I understand that “true” normal mapping normally requires per-pixel lighting, so I’m not aiming for physically correct results. This is more of a stylized / hybrid approach.
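To make the idea concrete, here is a rough sketch in plain Python of what I mean (all names, weights, and values are made up by me for illustration; in Unreal this would live in a material or custom node, not Python): lighting is computed per vertex and interpolated Gouraud-style, and the normal map only scales that interpolated result per pixel instead of relighting it.

```python
# Hypothetical sketch of the hybrid: per-vertex (Gouraud) lighting,
# with the normal map used only as a non-physical per-pixel modulator.

def lambert(normal, light_dir):
    # Clamped N.L, the usual per-vertex diffuse term.
    n_dot_l = sum(n * l for n, l in zip(normal, light_dir))
    return max(n_dot_l, 0.0)

def gouraud_interpolate(vertex_intensities, barycentric):
    # Interpolate the three per-vertex intensities across the triangle.
    return sum(i * w for i, w in zip(vertex_intensities, barycentric))

def hybrid_shade(interpolated_light, normal_map_sample, strength=0.5):
    # Non-physical: a scalar derived from the normal map just scales
    # the Gouraud result instead of driving a per-pixel N.L.
    modulation = 1.0 + strength * (normal_map_sample - 0.5)
    return interpolated_light * modulation

# Example: a triangle lit from above, two flat vertices and one tilted.
light_dir = (0.0, 1.0, 0.0)
vertex_normals = [(0.0, 1.0, 0.0), (0.0, 1.0, 0.0), (0.707, 0.707, 0.0)]
per_vertex = [lambert(n, light_dir) for n in vertex_normals]
pixel_light = gouraud_interpolate(per_vertex, (0.3, 0.3, 0.4))
shaded = hybrid_shade(pixel_light, normal_map_sample=0.8)
```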

My question is:
Is it possible in Unreal Engine (via materials or custom shaders) to combine per-vertex lighting with normal maps used as a secondary modulation, instead of standard per-pixel lighting?

If so, what would be the recommended approach (Unlit materials, custom vertex calculations, etc.)?

No. The engine composites the final image per pixel by combining all the layers.

What you can do is fake it by altering each layer to taste.

But it’s better if you pick an engine that isn’t locked down and do it the right way.

Also, a .usf isn’t going to do much, as the geometry cache you’d need to access to read per-vertex data is not readily accessible or editable the way you would expect it to be. You can still get around it, but again, there are better options out there…


normal mapping is always per pixel. you cannot do it per vertex. how do you imagine that working?

if you wanna do it per vertex you’d have to use the forward renderer and write custom shaders: in the vertex shader, compute emboss bumpmapping offsets for the (mobile spec) 5 possible lights per object, and in the pixel shader, combine each emboss map with the individual light colors and modulate the whole rgb result. you have 8 4d vectors between the shader stages, so you should pack the data: 5 rgb triplets and 5 2d vector sets. you should also use bc4 greyscale compression to lower the memory bandwidth, because you’ll have to randomly sample the emboss map 6 times at different locations to compute the embossing of the 5 lights.
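as a rough illustration of what a single emboss tap per light does (plain python with a made-up height map, not shader code — the real thing would be HLSL across the vertex and pixel stages as described above):

```python
# Hypothetical emboss bump mapping sketch: sample a greyscale height map
# twice, offset along the light's tangent-space direction, and use the
# difference as the bump shading term for that light.

def sample(height_map, u, v):
    # Nearest-neighbour sample of a tiny example height map.
    row = min(int(v * len(height_map)), len(height_map) - 1)
    col = min(int(u * len(height_map[0])), len(height_map[0]) - 1)
    return height_map[row][col]

def emboss_term(height_map, u, v, light_offset, strength=1.0):
    # light_offset is the per-light 2D vector the vertex shader would
    # compute and pass down to the pixel shader.
    du, dv = light_offset
    h0 = sample(height_map, u, v)
    h1 = sample(height_map, u + du, v + dv)
    # Biased so flat areas sit at 0.5; slopes toward/away from the light
    # push the result up or down.
    return 0.5 + strength * (h0 - h1)

height_map = [
    [0.2, 0.4, 0.6],
    [0.2, 0.5, 0.7],
    [0.1, 0.3, 0.8],
]
# One of the five lights; its color would modulate this scalar.
shading = emboss_term(height_map, 0.1, 0.1, light_offset=(0.34, 0.0))
```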

regular normal mapping is cheaper, tbh. one sample, a dot product, and some color modulation, and you get a better result.
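for comparison, the regular per-pixel path really is just that (again plain python pseudo-math with hypothetical values, not engine code):

```python
# Standard per-pixel normal mapping: one texture sample, one dot
# product, then color modulation.

def unpack_normal(rgb):
    # Map [0,1] texture channels back to a [-1,1] tangent-space vector.
    return tuple(2.0 * c - 1.0 for c in rgb)

def normal_mapped_diffuse(normal_rgb, light_dir_ts, light_color):
    n = unpack_normal(normal_rgb)
    n_dot_l = max(sum(a * b for a, b in zip(n, light_dir_ts)), 0.0)
    return tuple(n_dot_l * c for c in light_color)

# A "flat" normal map texel (0.5, 0.5, 1.0) lit head-on.
out = normal_mapped_diffuse((0.5, 0.5, 1.0), (0.0, 0.0, 1.0), (1.0, 0.9, 0.8))
```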


Thank you for the answer. Unreal is the only game engine I’ve ever used, and this lighting project is just an experiment, so my idea was to try it on a level I already have.

In any case, I’ll definitely keep what you said in mind, and I’ll look into other software if I don’t get what I’m looking for.

Take care.

I didn’t really have a concrete idea of how it could work, that’s why I asked. I was just curious about whether a hybrid of per-vertex and per-pixel lighting was possible and whether it would make any sense.

From your reply, I understand that it doesn’t. Still, I appreciate the technical breakdown of how it could theoretically be done.

No, but you do have volume-driven lighting based on the mesh’s generated distance fields - LPV setups. This is somewhat like what you describe, since it accounts for the geometry, but it works through the distance fields and is still on the pixel side of things.

Honestly, I’m not sure what you are attempting to do - but - just as you can change shader models (SM4 or SM5) in the materials depending on how you build the engine and publish the project, maybe you can pull the engine source and change just enough of SM4 to shade with Gouraud-style functions instead of the (likely) Phong-style shading that is in place.

The nightmare part is going to be forcing the engine to publish the project with just that shader model (which is why finding a different engine is probably overall easier).

The part I honestly don’t understand is why.

If the goal is to manipulate the normal map only by forcing the engine to consider the object’s physical direction, then material functions interpolated into the normal map are going to do what you want.

If the goal is to get something that looks Gouraud-like, then a post process where you use frac or other options (I suggest you look up HLSL and push it into a custom node) will probably do what you want. After all, the engine provides a better image, so downgrading it is always possible.
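As a sketch of that frac/floor-style downgrade (plain Python with made-up band counts; in Unreal this would be HLSL in a post-process custom node):

```python
# Hypothetical posterize curve: quantize rendered luminance into a few
# hard bands so smooth per-pixel shading reads more like coarse
# vertex-level shading.
import math

def posterize(luminance, bands=4):
    # floor(x * bands) / bands collapses smooth gradients into steps;
    # the same curve can be built with floor/frac in an HLSL custom node.
    return math.floor(luminance * bands) / bands

# A smooth 0..1 ramp collapses into 5 discrete levels.
samples = [posterize(x / 10.0) for x in range(11)]
```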

The thing - and what you have to cope with - is that it’s completely backwards.

You have to spend computation to obtain something that, if it were computed that way natively, would likely net major savings on the final overall scene cost.

Speaking of, and come to think of it - try forward shading?

And maybe use that as the lead example for deriving your code and seeing how things need to be set up, rather than just changing SM4 to taste.

I think it could be a better starting-off point, since when you set it up the project will already publish correctly… so pull the engine source, mod the forward shading, and push/publish a project that way if you really want to bash your head on this :wink:

I came up with this idea while playing around with a plugin that simulates per-vertex lighting in Unreal. I thought it would be awesome if I could use a mask to control how strongly the plugin affects different parts of the mesh, so that some areas would be more affected than others.

But I’m definitely not going to bash my head against the engine that way :joy:

I guess I started building the house from the roof without realizing it. Since this is just experimental, I’ll try the much simpler things you suggested, like using material functions interpolated into the normal map.

Thank you!
