AI for 10x Faster Light Baking. Useful?

Hey all!

In early 2021, I started working on an AI that significantly accelerates light baking, achieving speedups of 2x, 4x, and even 10x.

It works by using a neural network in tandem with an existing light baker: the baker does part of the work, and the neural net does the rest.

A lot of people are familiar with denoising and the idea is similar, but different.
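To give a rough picture of the split, here's a toy sketch. To be clear, this is illustrative only, not the actual implementation: the noise model in `partial_bake` and the box-blur stand-in for the neural net in `nn_refine` are assumptions I'm using to show the idea of "baker does part of the work, network finishes it."

```python
import numpy as np

rng = np.random.default_rng(0)

def partial_bake(ground_truth, speedup):
    """Stand-in for the light baker doing 1/speedup of its usual work.

    Fewer samples per texel means higher Monte Carlo variance, so the
    noise level here grows roughly with sqrt(speedup)."""
    noise = rng.normal(0.0, 0.05 * np.sqrt(speedup), ground_truth.shape)
    return np.clip(ground_truth + noise, 0.0, 1.0)

def nn_refine(lightmap):
    """Placeholder for the trained network that finishes the bake.

    A 3x3 box filter stands in for the learned step purely for
    illustration; the real system learns this mapping from data."""
    h, w = lightmap.shape
    padded = np.pad(lightmap, 1, mode="edge")
    return sum(padded[i:i + h, j:j + w]
               for i in range(3) for j in range(3)) / 9.0

# A smooth "true" lightmap: a horizontal brightness gradient.
truth = np.tile(np.linspace(0.0, 1.0, 64), (64, 1))
noisy = partial_bake(truth, speedup=4)
refined = nn_refine(noisy)

# The refined result should be closer to the full-quality bake
# than the raw partial bake is.
print(np.abs(refined - truth).mean() < np.abs(noisy - truth).mean())  # prints True
```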

(Note: These speedups are approximate, as only some parts of the light build are currently accelerated by this factor.)

  • The speedup multiplier between renders using the AI is the primary benefit, and will apply generally to any scene.

  • The absolute render times are not the point here; they will vary with hardware. The acceleration from the AI will be 10x (for example) over whatever the base light bake time is on any particular machine.

A clearer explanation from Sebastian___:
if your previous bake took 10 minutes, with this solution it could take just 1 minute.

Here are some screenshots showing the results using the AI. (In the “Lighting Only” view.)

1x (No AI Used, old GPULightmass) Time Elapsed: 2924 seconds (48.7 min)

4x Time Elapsed: 879 seconds (14.7 min)

10x Time Elapsed: 464 seconds (7.7 min)

There is some tradeoff of quality for speed here, but it’s worth it IMO for development iterations or certain use-cases.

A couple of close-up comparisons:



I put this project on hold after seeing the raytracing and Lumen technologies evolve, thinking that light baking might not be relevant anymore.

But I’m revisiting the idea now and I’m trying to gauge interest:

  1. Would this AI tool benefit your work despite Lumen and raytracing advancements?
  2. Would you consider it a worthy investment if it was offered commercially?

Your responses will help me decide whether continuing this project makes sense.

If you’re interested enough, use this Google form and leave your email. I’ll contact you if the project moves forward: https://forms.gle/jydVAWbgTFyJqnqw6

I’m looking forward to your responses! Thanks in advance!

EDIT: The important takeaway is that whatever your current light build time is, the result will be 10x faster on your hardware, accounting for some overhead that doesn’t get sped up.
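That overhead caveat behaves like Amdahl's law: only the accelerated portion of the bake shrinks. Here's a rough back-of-envelope estimator; the `overhead_fraction` value is a hypothetical parameter for illustration, not a measured figure:

```python
def estimated_bake_time(base_seconds, speedup, overhead_fraction=0.05):
    """Amdahl-style estimate of a bake with AI acceleration.

    Only the accelerated portion shrinks by `speedup`; the fixed
    overhead (scene setup, encoding, etc.) does not.
    `overhead_fraction` is hypothetical, not a measured number."""
    accelerated = base_seconds * (1.0 - overhead_fraction) / speedup
    overhead = base_seconds * overhead_fraction
    return accelerated + overhead

# A 10-minute bake at a nominal 10x speedup with 5% fixed overhead:
print(round(estimated_bake_time(600, 10)))  # prints 87
```

So the nominal multiplier is an upper bound, and the effective end-to-end speedup lands a bit below it.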

Sounds really interesting, especially since UE5’s light baking is a completely broken mess atm. Signed up!

10x Time Elapsed: 464 seconds (7.7 min)

It’s difficult to form an opinion on this because I don’t know what kind of hardware you’re running this on or anything about how it works. But for such a simple scene, 7.7 minutes still seems like a really long time.

Just for reference I would expect CPU lightmass to build this at production quality in under 2 minutes on any modern processor.

I don’t want to discourage you because it could be useful and interesting, but only if you got the build times down much lower. In order to be a practical option for Unreal users I think it would also need to support all the existing lightmass features (GI, volumetric lightmaps, indirect lighting intensity, stationary lights, etc.)

Thanks for the reply!

The important takeaway is that whatever your current light build time is, the result will be 10x faster on your hardware, accounting for some overhead that doesn’t get sped up.

Some additional info if you are interested:

  • The project is currently based on the unofficial GPULightmass from a few years back (Luoshuang’s) and, right now, supports all the features it supports.

  • In principle it could support CPU Lightmass, but I didn’t go that route since generating the training data would’ve taken a lot longer. The changes needed wouldn’t take long.

  • This scene takes a long time to render because the lightmaps are huge; the lightmap density view is red on every object. Once again, the multiplier between renders is the benefit on offer.

No to either.

Dynamic lighting would cost me a lot less and give a more than decent “preview”, without having to bake anything with an already inaccurate GPU baker whose output will never pass as an end product.

If you were somehow able to do this for a CPU bake (which I usually cut down with Swarm, a Coordinator, and around 10 CPUs), then that would be something to consider investing in.

Unfortunately, since CPU and GPU bakes are almost night and day in terms of end quality, I don’t really see any potential in lowering the imprecise GPU bake time, at least until the process is able to provide results as good as or better than the CPU side…

Maybe you can phrase this in an even more clear way, something like:

  • if your previous bake took 10 minutes, with this solution it could take just 1 minute.
  • or it could take just 10 seconds if it previously took 100 seconds.

Is it denoising the output lightmaps? Or is it actually involved in the raytracing process somehow?

Lightmaps still have a lot of value for projects that need to be extremely performant, such as mobile/VR, or just PC games that people want to run as well as possible. Sadly Unreal isn’t a very popular choice for mobile games though, so I dunno. You probably had the right idea using the mailing list for gauging interest.

It can be done with a CPU bake; there is no effective difference except that CPU bakes take longer to generate the training data.

In my experience, the GPU bakes have been pretty high quality. I haven’t used the new GPU Lightmass though (this is using the old unofficial one which I believe uses a different algorithm than the new one).

Denoising might not be the right term, but the light baker does part of the work and the neural net does the rest.

The idea is similar to denoising, but different.

Lightmaps still have a lot of value for projects that need to be extremely performant, such as mobile/VR, or just PC games that people want to run as well as possible. Sadly Unreal isn’t a very popular choice for mobile games though, so I dunno. You probably had the right idea using the mailing list for gauging interest.

Thanks for the input!

Added to the post, thanks!

The 2x speedup model went missing over the years and I was unable to get any screenshots of it.

I’m training another one right now and should have some samples up in a few days.

The 2x model got about halfway through training and its output is already looking pretty good.

The quality loss at 10x is pretty evident, but you have to look hard to find it at 2x.

So you can get 2x faster with almost identical quality levels.

Since we develop VR titles and use GPU Lightmass on a daily basis this would be very interesting.

The 2x model looks like it is finished training.

The 10x model is probably actually a little undertrained.

In comparison to the previous 2x model, a lot of the finer details improved and the noise in the less lit areas is quite a bit better.

I prepared a gif so you can better see the difference between 1x (no AI, just GPULightmass) and 2x.


IMO the quality loss is negligible, even for really high quality scenes.

If you weren’t looking at a direct comparison, it would be tough to even know the difference.

I’m going to prepare some more screenshots, with more indirect lighting and more complex scenes.

High-res versions, for anyone interested in inspecting without GIF compression:


1x


2x

VR is a common theme I keep hearing!

Here are those shots of the indirect lighting at 2x.

You can see some noise gets introduced, especially in the less lit areas, and some slight issues with the UV seams (I have a pretty good idea of what is causing that).

Overall though, it looks pretty good, and in the “lit” view mode it looks great.


1x (No AI)


2x

The bake timings for this scene are:
1x - 4555s
2x - 2386s
4x - 1299s
10x - 623s
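For reference, dividing these out shows the effective end-to-end multipliers land a bit below the nominal settings, which lines up with the overhead that doesn’t get sped up:

```python
# Measured bake times from the scene above, in seconds.
timings = {1: 4555, 2: 2386, 4: 1299, 10: 623}

# Effective end-to-end speedup for each nominal setting.
for nominal, seconds in timings.items():
    print(f"{nominal}x nominal -> {timings[1] / seconds:.1f}x measured")
```

That works out to roughly 1.9x, 3.5x, and 7.3x measured for the 2x, 4x, and 10x settings.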


Usually that kind of noise or artefact is visible on plain surfaces without textures or shaders, and becomes invisible or unnoticeable once textures/shaders are applied.

Here are the “lit” view modes for comparison:


1x


2x

It does seem a little bit more visible on the floor.

I’m fairly confident this effect will improve as the architecture of the neural net is modernized.

I just left a 4x model training while I did other things this week and forgot about it.

It ran twice as long as other training runs, which I’d thought were reaching a plateau, and it’s still showing improvement.

Hopefully I can produce some screenshots next week for some longer trained models and see if the noise in those areas improves.

You could probably look into Nvidia’s new ray reconstruction stuff since it basically does AI path tracing in real time. I’d imagine there might be some way to leverage it for baking light, but it would likely take some work.