I'm currently dealing a lot with lightmapping, and there are 2 questions I would like to discuss.
1. UV alignment raster in modeling package
Almost all info on the web says to calculate the grid size for UVs using Grid size = 1 / Lightmap Resolution.
Reading the [Docs from Epic][1], however, shows this formula: Grid size = 1 / (Lightmap Resolution - 2). For a resolution of 64, that gives 1 / 62 = 0.0161290323.
Here’s a quick picture with the resolution set to 8 pixels and 2 variants of choosing the grid raster in Blender. There’s just a quad unwrapped and aligned to the raster. Left: using 6x6 per the formula above; right: using 8x8. An extra 1-pixel padding is added by UE4 outside.
When that padding is added by Unreal Engine at bake time, most UV values will end up misaligned in the resulting texture if a raster of 8x8 is used.
There are some discussions on this, but even the gurus describe the 1 / LMAP_res approach, which makes me uncertain. I tend to do it the way shown on the left side of my image, and I’m almost sure that this is the way to go because of Epic’s documentation mentioned above - well, almost. Does anyone have the definite answer?
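To make the difference concrete, here is a small Python sketch comparing the two formulas for a few common resolutions. It assumes (per Epic's docs as I read them) that UE4 reserves a 1-pixel padding border on each side, so only resolution - 2 texels are usable for aligned UVs:

```python
# Compare the naive grid-size formula with Epic's padded variant.
# Assumption: 1 texel of padding on each side -> (resolution - 2) usable texels.

def grid_size_naive(resolution: int) -> float:
    """Grid size ignoring padding: 1 / resolution."""
    return 1.0 / resolution

def grid_size_padded(resolution: int) -> float:
    """Grid size per Epic's formula: 1 / (resolution - 2)."""
    return 1.0 / (resolution - 2)

for res in (8, 32, 64, 128):
    print(f"res {res:>3}: naive {grid_size_naive(res):.10f}, "
          f"padded {grid_size_padded(res):.10f}")
```

For a resolution of 8 (as in my picture), the padded formula gives 1/6 ≈ 0.1667 instead of 1/8 = 0.125, which is exactly the 6x6 raster on the left.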
2. Power of 2 for Lightmap Resolution of static mesh - really?
Sources on the web all recommend using power-of-2 values for the lightmap resolution; just one example [here][3].
I found that random values entered for Overridden Light Map Res on a mesh automatically get rounded to the next multiple of 4 in the entry field - there must be some reason for that.
I did a test with resolutions 96 and 32 and found that these get packed perfectly into a 128x128 shadowmap texture.
This actually makes sense to me because, in the end, the final baked lightmap/shadowmap textures need to be power of 2 (streaming?), but not the individual parts from which Lightmass assembles them.
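A quick sketch of that observation in Python. The rounding direction is an assumption on my part (I only observed that the field snaps to multiples of 4, not whether it rounds up or to nearest), and `round_to_multiple_of_4` is just a name I made up for illustration:

```python
# Sketch: the editor appears to snap Overridden Light Map Res to a multiple
# of 4. Assumption: it rounds up; the exact direction is unverified.

def round_to_multiple_of_4(value: int) -> int:
    """Round up to the nearest multiple of 4 (assumed editor behaviour)."""
    return ((value + 3) // 4) * 4

# Non-power-of-2 resolutions can still tile a power-of-2 atlas exactly:
print(round_to_multiple_of_4(93))  # snaps to 96
print(96 + 32)                     # = 128, so both fit a 128-wide atlas
```

So a 96 and a 32 sit side by side in a 128-wide texture with no wasted space, which matches what I saw in my test.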
Not being forced into power-of-2 jumps for some gain in quality brings quite some potential memory savings, in my opinion. Any thoughts on this?
Thanks in advance for any discussion of these points.