Questions about static lighting

So I’m looking through the static lighting results and I’d like to find out how it works. Here’s an illustrative example:

So in this shot you can see the finished result, the lighting on its own, and what I assume is the static lighting radiance map.

As you can see, the radiance map is used to make the surfaces around the orange door turn slightly orange, which is neat. The door itself is excluded. On the atlas you can see where the radiance is sampled from, plus a normal map to help fade out radiance on items not really facing the door.

So that’s all cool. But I still have questions:

  1. That was the low quality version. Why is the high quality version transparent, and what has been packed into its alpha channel? The HQ version doesn’t seem to be any larger otherwise.
  2. Where is the direct static lighting map with light and shadows on it? It’s not the one above, is it?
  3. How are those samples placed into the atlas? Is it just based on a grid, or is there a UV map somewhere, or is the atlas mapped to all of those objects in the way shown? If so, what UV channel is used for that?

Ultimately I’m aiming to modify the static lighting in code - any pointers?

First, a disclaimer: I have poked around the engine, but only for a short while and not in any great depth. What I say below could be misinformed, but I hope it will help.

Answer 3 is that each object needs a unique UV map for light mapping (you set this in the mesh properties and/or when importing it). That UV gets scaled into the big atlas you see on the right.
Exactly where each object ends up in the static lighting bake is not well-defined: the lightmapper in the Unreal Editor creates that layout procedurally each time you bake lighting, and even a small change can have big consequences for how things are laid out in the atlas.
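
If you want to see that placement in code, each static mesh component keeps a reference to its lightmap, and FLightMap2D looks like it stores the coordinate scale/bias that positions the object's UVs inside the atlas. A rough, untested sketch (names are from my quick look at the 4.8/4.9-era source, and the function name is my own, so treat with suspicion):

#include "Components/StaticMeshComponent.h"
#include "LightMap.h"

// Untested: report where a component's lightmap UVs landed in the atlas.
void DumpLightMapPlacement(UStaticMeshComponent* Component)
{
	if (Component->LODData.Num() > 0 && Component->LODData[0].LightMap.IsValid())
	{
		if (FLightMap2D* LightMap2D = Component->LODData[0].LightMap->GetLightMap2D())
		{
			// The object's lightmap UVs are transformed by this scale/bias
			// to land in its rectangle of the shared atlas.
			const FVector2D Scale = LightMap2D->GetCoordinateScale();
			const FVector2D Bias = LightMap2D->GetCoordinateBias();
			UE_LOG(LogTemp, Log, TEXT("%s: scale=(%f, %f) bias=(%f, %f)"),
				*Component->GetName(), Scale.X, Scale.Y, Bias.X, Bias.Y);
		}
	}
}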

Answer 2 is that Unreal bakes light “components,” not the full lighting solution. This is to support dynamic lights, even when they aren’t movable. For each light, and for each receiver, you will see the three mobility options static / stationary / movable. For object/light pairs that are movable, real-time solutions are used; for pairs that are static, the diffuse and shadow terms COULD be pre-calculated in full, but specular cannot. Finally, for pairs where at least one element is stationary (and the other is static or stationary), you get what you see above: a set of lighting components rather than a fully baked map, with the rest calculated in the pixel shader at runtime. Even for static/static pairs, I believe specular is done at runtime too.
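
For reference, mobility is just a property on the component, so in code the pair logic above boils down to something like this (my paraphrase, with a made-up function name, not actual engine logic):

#include "Components/LightComponent.h"
#include "Components/PrimitiveComponent.h"

// My paraphrase of the pair rules above; not engine code.
bool CanFullyBakePair(const ULightComponent* Light, const UPrimitiveComponent* Receiver)
{
	// Only a static light shining on a static receiver can be fully baked
	// (and even then, specular is still evaluated at runtime).
	return Light->Mobility == EComponentMobility::Static
		&& Receiver->Mobility == EComponentMobility::Static;
}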

Answer 1: No idea what they put in Alpha. Ambient occlusion? Something else? Check out the source code from Git and go spelunking…

I’m afraid that doesn’t really answer much of my question, but thank you for replying anyway!

I think the answer to 3 is that the existing UV maps are re-islanded into the atlas, which is then used as the render target. I tried to prove this by raising the lightmap resolution of a panel and rebuilding the lighting, but it made no difference to the atlas, which is still only 1024x1024. I think this means the atlas is only an irradiance map and has no relation to the shadow or lighting components.

I still need more clues on where to find and view the level’s static lighting result. I guess I’ll have to look at the code again.

Let me preface this by saying I don’t normally deal with static lighting at all. But after taking a quick look, there are two types of textures relating to static lighting: the LightMap and the ShadowMap. What you have shown above is the LightMap. It appears to be a series of encoded cubemaps from the light’s perspective and has no relation to shadow map sizes or anything like that. I have no idea what’s packed into the alpha channel.

The ShadowMap has only one channel, the red channel (all the others are blank, including alpha, which makes it appear completely transparent unless you turn off the alpha channel for viewing). This one encodes the actual shadows per object; you will be able to see the UV unwrapping of each object packed into a series of textures. It should be fairly easy to find, since it will be entirely red/black, and its name is normally ShadowMapTexture2D_#.
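
If you’d rather find them from code than hunt around the content browser, a TObjectIterator sweep over that class should turn them up once they exist (untested sketch, function name is mine):

#include "Engine/ShadowMapTexture2D.h"
#include "UObject/UObjectIterator.h"

// Untested: list every shadow map texture currently loaded.
void ListShadowMaps()
{
	for (TObjectIterator<UShadowMapTexture2D> It; It; ++It)
	{
		UE_LOG(LogTemp, Log, TEXT("Found shadow map: %s"), *It->GetName());
	}
}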

Hopefully this at least partially answers some of the questions. As I said, I don’t normally deal with static lighting, so this is just pure speculation based on what I’m seeing.

Thanks, that’s a huge push in the right direction! It’s a bit odd that lightmaps are viewable but shadow maps aren’t, but I’ll figure out some way to display them.

May I ask where you found the shadowmaps? I’ve been trying to iterate over the UShadowMapTexture2D class in the asset registry, but I’m not finding anything. UTexture2D works fine but even with subclasses on I can’t find a shadow map.

They should be under WorldSettings -> Lightmaps. Since that section shows both, I figured that if you could find LightMaps you would be able to find ShadowMaps as well.

The mystery deepens. I do not have any shadowmaps; everything has “lightmap” in the name. I’ve definitely built the lighting for the sample I’m using, which is the sci-fi hallway.

I’m still on 4.8.3, what version are you using?

4.9, but I did my initial look under 4.8 using the Minimal_Default map and the FirstPersonExampleMap. You could try obtaining them via the following function (which is how the list in WorldSettings is populated):



// Fills the array with every lightmap and shadowmap texture used by the level.
TArray<UTexture2D*> LightMapsAndShadowMaps;
World->GetLightMapsAndShadowMaps(World->GetCurrentLevel(), LightMapsAndShadowMaps);
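
And if you only want the shadow maps out of that array, filtering by class should do it (untested, but UShadowMapTexture2D is the class I mentioned above):

for (UTexture2D* Texture : LightMapsAndShadowMaps)
{
	// Shadow maps come back as a distinct subclass, so IsA<> can separate them.
	if (Texture && Texture->IsA<UShadowMapTexture2D>())
	{
		UE_LOG(LogTemp, Log, TEXT("ShadowMap: %s"), *Texture->GetName());
	}
}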


Thanks, that worked a lot better! I definitely have no shadow maps. I’ll try some other content.

Edit: lol, I’m thick. The directional light was not set to cast shadows. On the plus side, my next step was to access them programmatically, and that’s already done. :slight_smile:

The bottom half of those look to be directional lightmaps. Not to be confused with directional lights, directional lightmaps are static lightmaps that store some limited directionality.

That is basically where the lighting is approximated by three separate lobes whose directions are spaced 120 degrees apart. Each of the R, G, and B channels represents one direction, similar to a normal map, except these directions are spaced evenly, like a Y viewed from above in tangent space. I am pretty sure the top part is just the average color before applying the directionality. The engine can then light from each of the three directions by taking a dot product between the tangent-space normal and each lobe’s direction vector (the vectors are simply defined in tangent space). So you get a little bit of normal map response and three fake, evenly spaced specular highlights.
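
To make that concrete, here is roughly what evaluating those three lobes looks like. This is my own sketch of the classic three-basis layout (orthonormal vectors whose projections onto the surface form that Y, 120 degrees apart); the engine’s actual basis vectors and encoding may differ:

#include "CoreMinimal.h"

// Sketch of three-basis directional lightmap evaluation; illustrative only.
FLinearColor EvaluateDirectionalLightmap(
	const FVector& TangentSpaceNormal,   // from the normal map
	const FLinearColor LobeColors[3])    // one baked color per lobe direction
{
	// Unit vectors whose top-down projections are 120 degrees apart.
	static const FVector Basis[3] = {
		FVector( 0.816497f,  0.0f,      0.577350f),
		FVector(-0.408248f,  0.707107f, 0.577350f),
		FVector(-0.408248f, -0.707107f, 0.577350f),
	};

	FLinearColor Result(0, 0, 0);
	for (int32 i = 0; i < 3; ++i)
	{
		// The dot product against each lobe direction is what gives a bit of
		// normal map response and three fake, evenly spaced highlights.
		const float Weight = FMath::Max(0.0f, FVector::DotProduct(TangentSpaceNormal, Basis[i]));
		Result += LobeColors[i] * Weight;
	}
	return Result;
}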

You can see more of how this works by making a simple test level with just a plane as a floor, then moving a single point light around the plane, rebuilding lighting, and viewing the lightmaps. You will see that some specular directions are approximated better than others.

That’s pretty much what I concluded too, so it’s good to get confirmation that I figured at least something out on my own. :slight_smile:

I’m getting very close to achieving what I set out to do now; thanks for your help, everyone! I do have one last question, though:

How are the static light/shadowmaps assembled? Is there possibly a merged static mesh somewhere of every mesh represented by the shadowmap, with their combined UVs placed into that map? Ideally I’m looking for either a single mesh, or a way to tell how an object’s UVs ended up in that shadowmap and where they are.
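
Edit: in case it helps anyone later, the direction I plan to dig in is the per-component data: FShadowMap2D seems to mirror FLightMap2D (see the lightmap sketch earlier in the thread), including a coordinate scale/bias that should say where each object’s UV island sits inside a given ShadowMapTexture2D_#. Completely unverified sketch, function name is mine:

#include "Components/StaticMeshComponent.h"
#include "ShadowMap.h"

// Unverified: mirrors the lightmap case, but for the shadow map reference.
void DumpShadowMapPlacement(UStaticMeshComponent* Component)
{
	if (Component->LODData.Num() > 0 && Component->LODData[0].ShadowMap.IsValid())
	{
		if (FShadowMap2D* ShadowMap2D = Component->LODData[0].ShadowMap->GetShadowMap2D())
		{
			// Scale/Bias should locate this component's UV island in the atlas.
			const FVector2D Scale = ShadowMap2D->GetCoordinateScale();
			const FVector2D Bias = ShadowMap2D->GetCoordinateBias();
			UE_LOG(LogTemp, Log, TEXT("%s: scale=(%f, %f) bias=(%f, %f)"),
				*Component->GetName(), Scale.X, Scale.Y, Bias.X, Bias.Y);
		}
	}
}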