Everything will be explained in the docs. I'll try to explain everything there as truthfully as possible, as I want everyone to understand how to use it and be able to make a good call on whether it's right for their production.
The general idea is that it's like Next Event Estimation (direct lighting) in path tracing. Each pixel traces a fixed number of rays towards lights. If any of those rays hits a light, we compute its influence on the pixel and add that to the pixel's color. At the end there's a denoiser, which tries to keep the image reasonable when we have more lights per pixel than available rays. There's also some ray guiding, so that instead of tracing rays randomly we trace them towards lights which seem to be more important.
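To make the idea concrete, here's a minimal CPU sketch of that kind of stochastic direct lighting in Python. Everything here is illustrative, not engine code: the light model, the visibility callback standing in for a shadow ray, and using the unshadowed contribution as the guiding weight are all my assumptions for the sketch.

```python
import random

def unshadowed(px_pos, light):
    # Hypothetical light model: inverse-square falloff,
    # clipped to the light's attenuation range.
    d2 = sum((a - b) ** 2 for a, b in zip(px_pos, light["pos"]))
    if d2 > light["radius"] ** 2:
        return 0.0
    return light["intensity"] / max(d2, 1e-4)

def shade_pixel(px_pos, lights, visible, num_rays=4, rng=random):
    """Fixed ray budget per pixel, importance-sampled toward lights."""
    # "Ray guiding" stand-in: weight each light by its unshadowed
    # contribution so rays tend to go where they matter most.
    weights = [unshadowed(px_pos, l) for l in lights]
    total = sum(weights)
    if total == 0.0:
        return 0.0
    radiance = 0.0
    for _ in range(num_rays):
        # Pick one light proportionally to its weight.
        i = rng.choices(range(len(lights)), weights=weights, k=1)[0]
        pdf = weights[i] / total
        if visible(px_pos, lights[i]):  # the shadow ray
            # Unbiased estimate: contribution divided by pick probability.
            radiance += unshadowed(px_pos, lights[i]) / pdf
    return radiance / num_rays  # noisy estimate; denoised afterwards
```

With more strong lights than rays, only a subset gets sampled each frame, which is exactly the noise the denoiser then has to clean up.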
The important part here is that it replaces both shadowing and direct lighting (BRDF evaluation, unshadowed light application). Basically, all the deferred renderer lighting bits scattered around the frame are removed and replaced with a single MegaLights pass covering shadowing and lighting for opaque/translucent/fog. Though translucency isn't supported yet :).
From the user's perspective, as the lighting scenario gets harder (more lights having a strong influence on a given pixel, or large area lights casting shadows through complex geo), lighting quality decreases as the denoiser needs to do more work. At some point it will overwhelm the denoiser and you'll see noise and similar artifacts.
To keep it working well, try not to place lights inside geo, and narrow their bounds as much as possible (spot angle, rect light barn doors, attenuation range). There's a debug view showing which lights affect a given pixel. It turned out to be quite useful in the demo for finding huge lights on the other side of the level, or lights buried inside walls, which won't ever affect anything but still need to be processed by MegaLights.
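A toy way to see why narrowing bounds matters: count how many lights' attenuation spheres touch a single pixel as the radius shrinks. The scene, light count, and radii below are made-up numbers purely for illustration.

```python
import random

def lights_touching_pixel(px, lights):
    """Count lights whose attenuation sphere contains the pixel."""
    return sum(
        1 for l in lights
        if sum((a - b) ** 2 for a, b in zip(px, l["pos"])) <= l["radius"] ** 2
    )

random.seed(0)
# 200 lights scattered in a 100-unit box, pixel at the center.
positions = [tuple(random.uniform(0, 100) for _ in range(3)) for _ in range(200)]
px = (50.0, 50.0, 50.0)
for radius in (50.0, 20.0, 10.0):
    lights = [{"pos": p, "radius": radius} for p in positions]
    print(radius, lights_touching_pixel(px, lights))
```

Each light you trim out of a pixel's candidate set is one less thing competing for that pixel's fixed ray budget.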
Another limitation is the BVH: you need to make sure that all important details are in it and that they aren't culled too early. The BVH is shared with Lumen, so that's not so bad, as you tweak it once for both direct and indirect lighting. Some of the missing geo detail can be restored by screen space traces, but those have their own limitations and artifacts. As we keep working on the HWRT path and it becomes more common, things will improve here.
Finally, everything goes through a denoiser. As lighting scenarios get harder or there's more movement, denoising gets less effective and you'll see more artifacts. Denoising is certainly something we want to improve, as it's missing a few bits required to handle things like racing games or fast lighting changes.
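As a toy illustration of why movement hurts (this is a generic temporal accumulator sketch, not MegaLights' actual denoiser): temporal denoisers blend each noisy frame with accumulated history, and whenever history gets rejected (disocclusion, fast lighting change), the raw per-frame noise shows through.

```python
import random

def temporal_accumulate(noisy, history, alpha=0.1, history_valid=True):
    """Exponential moving average over frames; falls back to the raw
    noisy input whenever history is missing or rejected."""
    if history is None or not history_valid:
        return noisy
    return alpha * noisy + (1.0 - alpha) * history

# Static scene: accumulating 100 noisy frames converges near the true value.
random.seed(1)
true_value = 1.0
acc = None
for _ in range(100):
    sample = true_value + random.gauss(0.0, 0.5)  # noisy per-frame estimate
    acc = temporal_accumulate(sample, acc)
# 'acc' is now far less noisy than any single frame. With a moving camera
# history would be rejected more often, dropping quality back toward the
# raw noisy input.
```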
Overall, it does require careful setup and has edge cases, but in practice it works surprisingly well given a tiny console budget, if you know what you're doing. I was really stressed out about it, but during demo production, after a few pointers, artists were really happy with the workflow and had lots of fun finding new places where they could add hundreds of lights. Many of those lights were excessive and would likely be removed in a real game, one which doesn't try to showcase new lighting tech. The entire demo was also made in 2 months by like 5 artists, so there isn't any special demo magic and you should be able to recreate everything you saw there on your end.