How we plan to make lightmaps in UE4

The topic of lightmaps sucking and issues with using them in UE4 has come up. I’d like to address some of the issues as they exist in UE4 today.

The first is the pain of generating your own unique UVs. We currently have two methods in engine (Windows only, due to relying on D3D) to auto-generate lightmap UVs. The first creates UVs purely from raw triangles, knowing only about vertex positions. The results are pretty poor because seams show up in random places and the lightmap texels are arbitrarily aligned. The second method uses an existing UV set and repacks it to be unique. This means lightmap seams will only exist where there are already texture seams, which artists will place intelligently. There are still problems with texel alignment, distribution of area, many small charts, etc.

I just checked in a replacement for the second method yesterday. This is our code, so it runs on all platforms. It will repack an existing UV set, but much better than the prior method. It won’t rotate charts randomly, so the texel alignment will match what the original UVs had. It distributes UV space according to surface area, so the lightmap texels will be uniform in size. It will merge UV charts where it can, meaning fewer tiny charts wasting space. In many cases the packing is better than artist-provided UVs. It does all this in far less time too. It isn’t perfect but it is **** good.
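
To make the area distribution concrete, here is a rough sketch of the idea (simplified Python, not the actual engine code; the chart representation is hypothetical): each chart gets a UV scale factor so texels end up uniform in size.

```python
import math

def chart_scale_factors(charts):
    """For each UV chart, given (surface_area_3d, uv_area), compute a
    scale so texel density is uniform: the allocated UV area becomes
    proportional to 3D surface area. A real packer would then normalize
    all charts together to fit the [0,1] lightmap square."""
    scales = []
    for surface_area, uv_area in charts:
        if uv_area > 0.0:
            # Scaling UVs by s multiplies uv_area by s^2, hence the sqrt.
            scales.append(math.sqrt(surface_area / uv_area))
        else:
            scales.append(1.0)  # degenerate chart; leave it alone
    return scales
```

A chart covering 4x the surface area of another ends up with 4x the UV area (2x the linear scale), so lightmap texels stay the same size across both.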

The next step, which I’m working on now, is to make this an automatic part of the mesh import and building pipeline. By default, meshes brought into the engine will use this to generate lightmap UVs. If you have already created some, you can specify which channel they are in. You can also decide they aren’t needed, in the case of a mesh that is never placed statically. This should prevent the situation mentioned where the lightmap process isn’t understood by the artist early on and tons of stuff needs to be touched later. Instead, if the artist isn’t aware, everything will already have auto-generated lightmap UVs and things just work. If fine tuning needs to be done, it is much better to start from a working case than a broken one.

This auto lightmap UV work is step 1 in my vision for making the static GI solution a black box. Ideally, the details of how precomputed lighting is generated and stored won’t need to be understood by anyone but the most advanced users. We do a pretty good job in some areas compared to other offline renderers. For instance, the user of Lightmass doesn’t need to know anything about the fact that we use photon maps and irradiance-cached final gather to calculate GI. Many offline GI solvers expose that, along with a ton of confusing parameters associated with it. I’d like the same to be true for lightmaps. To use Lightmass you shouldn’t even need to know what lightmaps or lightmap UVs are. Most importantly, the fact that lightmaps are 2D textures mapped to surfaces shouldn’t be important to know. Any GI solution that isn’t per-pixel will have a resolution, so artifacts related to the resolution at which GI is calculated can never be completely solved, but I feel the artifacts specific to 2D textures can be.

Concisely, the vision is that a user should be able to just drop some random models in a level, click build and it just works.

The next steps for achieving this vision:

  1. Smooth texel mismatch seams using least squares fitting. This trades a minor amount of blurring for making texture seams practically vanish everywhere. http://miciwan.com/SIGGRAPH2013/Lighting%20Technology%20of%20The%20Last%20Of%20Us.pdf
  2. Lighting needs to exactly match on flat neighboring meshes. Unfortunately, due to the use of irradiance caching and the fact that we distribute the work amongst many machines this is hard to do. Meshes right now are computed completely separately, sometimes on different machines. Making sure the lighting matches across the edges of two placed models in the world without shooting 100s of rays for every lightmap texel is very hard. There isn’t a straightforward solution here but I’m sure there’s something we can do to fix this.
  3. Lastly there is the potential to leak between UV charts that are packed near one another due to filtering. Some of this comes from sloppy handling of UV packing which we can be much stricter on.
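
For item 1, the real solver couples every seam texel into one global least-squares system (see the linked Last of Us talk). A toy per-pair version shows the trade-off it makes between fidelity to the original values and seam visibility (illustrative Python, not Lightmass code):

```python
def smooth_seam_pair(a, b, fidelity=1.0):
    """Closed-form least squares for one texel pair across a seam:
    minimize (xa - xb)^2 + fidelity * ((xa - a)^2 + (xb - b)^2).
    Large fidelity keeps the original values (visible seam); small
    fidelity snaps both texels to their average, making the seam
    vanish at the cost of a little blurring."""
    mean = 0.5 * (a + b)
    half_diff = 0.5 * fidelity * (a - b) / (2.0 + fidelity)
    return mean + half_diff, mean - half_diff
```

With `fidelity=0` both sides meet exactly at the average; as `fidelity` grows the result converges back to the unsmoothed inputs.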

Everything won’t be fixed immediately, but trust me, we are aware of the issues and intend to improve things dramatically.

Lightmaps aren’t some ancient, obsolete concept. Some of the best looking games around continue to use them for good reason. There is no more efficient way to map high resolution color data to surfaces than 2D textures. This is true both for memory and performance. If a reduction in one or both is acceptable, other options become viable, but there is simply no better way to get low cost, very high quality indirect lighting.

Regarding the new UV generation technique (based on the existing UV) - is there any way to make it not carry over (or even fix up) overlapping UV coords from the source UVs?

Right now I tend to use the method that generates unique ones without basing it on the mesh UVs, because there are so many meshes that have overlapping UVs.

I don’t know if that’s even possible, but it would be a nice feature.

This is great news, I look forward to seeing the results!

I’m not sure I entirely understand the question but the technique takes non-unique UVs, ones that have UVs stacked, mirrored, tiled, or overlapping in arbitrary ways and repacks them uniquely such that there is no overlap and each UV chart has the correct amount of space between them to prevent leaking from bilinear filtering. There are a few edge cases of overlapping UVs it doesn’t solve correctly but I have yet to see them show up anywhere in practice (it doesn’t handle spirals in UV space).
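
On the spacing between charts, here is a back-of-envelope rule (my sketch, not necessarily the exact padding the packer uses): bilinear filtering reads up to one texel past a chart edge, so charts need roughly two texels of gutter at the target lightmap resolution.

```python
def min_chart_gutter_uv(lightmap_resolution, padding_texels=2.0):
    """Minimum UV-space gap between packed charts so bilinear filtering
    (which reads up to 1 texel past a chart edge) never samples a
    neighboring chart. padding_texels=2 leaves one texel of border on
    each side of the gap. The gutter shrinks as resolution grows."""
    return padding_texels / float(lightmap_resolution)
```

Note this depends on the resolution the lightmap is actually baked at, which is one reason sloppy packing leaks at low resolutions.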

Ah derp, I guess I was probably just misremembering then. Disregard!

This is just so amazing. Thank you guys! You rock!

That’s really great news, I look forward to seeing those improvements.
I have a question though: I’ve done light baking in Vray and it doesn’t have these issues with matching lighting across seams. What’s different compared to how the engine builds lighting?

Is there any chance we could get an Advanced tab in World Settings under Lightmass that exposes some of the tweakable parameters from BaseLightmass.ini, maybe applying only to Production settings? Personally I don’t have a problem editing .ini files, but it’s way more convenient to have this in the editor.

Btw, is there a limit to how far the secondary bounces travel? I’m asking because I’ve tried the reflector technique from the ArchViz thread, but using one really big plane lit by a spot light, and after the bake it didn’t have any effect on my scene. I guess that has something to do with the range of the bounce light affecting the world, if there is such a thing as range, of course. :slight_smile:

Vray supports many different integrators, including irradiance caching. I’m not sure if they support distributed rendering of a single frame, most offline renderers do not, but irradiance caching is an inherently serial algorithm. For a good rundown of how irradiance caching works, check out this course http://cgg.mff.cuni.cz/~jaroslav/papers/2008-irradiance_caching_class/

It places new cache records based on existing cache records. In a second pass it interpolates the cache records. Generating a record’s value, which is primarily just tons of ray casts, and interpolating, which is reading the cache and evaluating for a specific texel, can be completely parallel. Placing cache records can’t be parallelized while still following the traditional algorithm. Lightmass currently runs the algorithm from scratch for each mesh separately. This means meshes can be distributed to different machines. The problem is the interpolation step won’t interpolate from records of neighboring meshes, which means the interpolated result will be slightly different. One way to fix this is to introduce a sync point in Lightmass and build a global irradiance cache. That will slow things down. Additionally, we’ve seen weird artifacts from generating cache records with tighter spacing than the algorithm expects. Basically, there shouldn’t be records with overlapping radii. We would need to somehow solve that.
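
For reference, the record interpolation weight in the classic formulation from that course looks like this (standard Ward-style weighting, not necessarily what Lightmass implements verbatim). A record only contributes where its weight is high enough, which is why record spacing and radii matter:

```python
import math

def ward_weight(p, n, rec_p, rec_n, rec_r):
    """Classic Ward/Heckbert irradiance cache weight for a cached record
    at position rec_p with normal rec_n and validity radius rec_r,
    evaluated at shading point p with normal n. The weight falls off
    with distance (relative to the record's radius) and with normal
    divergence; records whose weight exceeds a threshold are blended."""
    dist = math.dist(p, rec_p)
    ndot = max(-1.0, min(1.0, sum(a * b for a, b in zip(n, rec_n))))
    denom = dist / rec_r + math.sqrt(max(0.0, 1.0 - ndot))
    return float('inf') if denom == 0.0 else 1.0 / denom
```

Records from a neighboring mesh that aren’t in the cache simply never enter this blend, which is exactly why per-mesh caches interpolate to slightly different answers at shared edges.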

This is great news. Thanks for sharing your vision and the progress you’ve made, Brian!

This seems like a pretty reasonable way to significantly improve the developer workflow in the lighting path. Fully dynamic GI won’t likely be commonplace and streamlined for many years, so it’s really cool to see you guys are still making huge strides in the static lighting path until we get there.

Are there specific ones you want? Many of the settings folks have posted as the high quality set don’t really have an impact. If all you are looking for is to dial it to 11 we can add that.

I’m not sure about bounce length. I don’t think so but is the man for that.

Great news!

Is that expected to happen in 4.5 or very unlikely?

Thanks Brian! I love the first step for auto UVs. This is something artists dread, and it has failed to go right many times before when starting projects with new artists.

Is this sharing of vision something you guys plan to do more often? Because I love it, and it gives great and critical information on how we as developers move forward with our own development too!

That (and most of Brian’s post) is exactly what I was talking about in the thread when everybody (well, most of them) tried to convince me that how things are at the moment cannot be changed or improved and I should simply put up with it.

Brian thank you.

Sounds like amazing news! Really looking forward to this. The current lightmap calculation allows great results but is also really prone to artifacts, and it’s definitely the aspect on which I find myself spending the most time to get good results.

Also, I’m specifically interested in archviz rendering too, so very accurate indirect lighting quality for relatively small indoor environments is what matters most, independent of baking times.

One of the issues I run into most often is due to shapes whose edges aren’t all perpendicular. If one of the edges of a wall has to be oblique, in many cases I have to straighten the UVs in the 2nd channel in order to avoid jagged artifacts due to lightmap resolution. In these cases, though, the texture UVs often need to remain oblique. How do you plan to address that?

This is great news. This saves me a lot of uv editing time.

I like the sound of this fo sho!

Thanks @Brian Karis , that sounds very promising :slight_smile: . I really appreciate how you Unreal dev & community guys communicate with us. Looking forward to the update with new and shiny features :slight_smile:

Ah, ok. Vray certainly does take longer to render than Lightmass

The auto lightmap UV code I wrote only lays out existing UVs. If the existing UVs don’t align with edges (they mostly will), you can modify the input UVs to help it. Right now the only option is to use UV channel 0 as input, but there is no need for that restriction. I’m going to allow you to specify the input and output UV channels, meaning that if you want to tweak existing UVs before feeding them to the auto layout code, which does the rest of the work, you can. So for example, you have your normal texture UVs in channel 0. You want to make some tweak to those UVs to use as input for lightmap UVs: do that and put them in UV channel 1. Then during import, say that the auto lightmap UV source channel is 1 and the destination channel is 1. It will then uniquely lay out your tweaked UVs and use them for lightmaps.
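
As a sketch of the import settings described above (hypothetical names, purely illustrative; the real UE4 properties may be named differently):

```python
from dataclasses import dataclass

@dataclass
class LightmapUVImportSettings:
    """Hypothetical mesh-import options mirroring the described workflow."""
    generate_lightmap_uvs: bool = True  # on by default after this change
    source_channel: int = 0        # UVs to repack (texture UVs, or a tweaked copy)
    destination_channel: int = 1   # where the unique lightmap UVs are written

# Example from the post: tweak the texture UVs, store the tweaked copy in
# channel 1, then repack channel 1 in place for lightmaps.
tweaked = LightmapUVImportSettings(source_channel=1, destination_channel=1)
```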