Does anyone have any guesses as to the core method behind the new GI solution? Is it related to RTGI, LPV, some kind of voxel-based solution? I know that the results are accumulated over a few frames. I saw mention of shadows being pixel-accurate, but I am wondering if we will have to contend with any sort of GI light leakage.
Maybe some kind of lighting probes, dynamically updated and accumulated over a couple of frames. Could be radiance probes, tracing rays from the surroundings and affecting only the ambient light term of the BRDF equation. They could be quite low-res, as the rest of the shading would still be rasterized.
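To illustrate the idea: if probes only feed the ambient term, the per-pixel shading stays rasterized and the probe just adds a slowly-accumulated irradiance on top. This is a minimal toy sketch under that assumption; all names here are hypothetical, not Lumen's actual pipeline.

```python
# Toy sketch: a radiance probe contributes only the ambient (irradiance) term,
# while direct lighting is still computed per-pixel by the rasterizer.
# All function names and parameters are illustrative assumptions.

def shade(albedo, direct_light, probe_irradiance):
    """Combine rasterized direct lighting with probe-supplied ambient GI (RGB)."""
    ambient = [albedo[i] * probe_irradiance[i] for i in range(3)]
    return [direct_light[i] + ambient[i] for i in range(3)]

def accumulate_probe(old_irradiance, new_sample, blend=0.1):
    """Refine the probe over several frames (temporal accumulation)."""
    return [(1.0 - blend) * old + blend * new
            for old, new in zip(old_irradiance, new_sample)]

# A dim red ambient probe layered on top of neutral direct lighting:
color = shade(albedo=[1.0, 1.0, 1.0],
              direct_light=[0.5, 0.5, 0.5],
              probe_irradiance=[0.2, 0.0, 0.0])
```

Since the probe term is decoupled from the rasterizer, it can be updated at a much lower rate (and resolution) than the main shading.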
My guess, some kind of voxelized approach.
If you look at the demo when Lumen is turned on, you can see in the first frame that some shadowed areas light up, only to get darker as the solution refines. That might indicate an initial low-res approach that increases resolution over time. Epic says their solution has infinite bounces, which is maybe a clue?
Whatever the case, their solution looks great, and I’m not seeing any of the blurriness that is evident with RTGI.
Turns out, it’s a clever combination of methods:
Looks like they are using a combination of:
- their previous SVOGI system,
- an improved version of the distance field / height field GI they introduced in 4.8 and the Kite demo,
- and their current screen-space GI.
That is incredibly clever. These devs are kinda smart.
DFGI has artifacts on small, extreme details. I don’t think that’s it.
It is part of it. DFGI makes up the mid-size portion of the GI. Screen space techniques seem to fill in the micro-gaps.
And skylighting seems to cover the large-distance portion; there’s a new, relatively low-cost skylighting update coming in 4.26. There’s also some mention of “voxels” for very large objects, but no details were given on how large or how far away.
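For context on the distance-field part: distance field GI traces rays by sphere tracing, stepping along the ray by the distance to the nearest surface each iteration, which is why fine geometric detail (below the field's resolution) produces artifacts. A minimal sketch against a single analytic sphere SDF, not Epic's actual mesh distance fields:

```python
import math

def sphere_sdf(p, center=(0.0, 0.0, 5.0), radius=1.0):
    """Signed distance from point p to the surface of a sphere."""
    d = [p[i] - center[i] for i in range(3)]
    return math.sqrt(sum(c * c for c in d)) - radius

def sphere_trace(origin, direction, sdf, max_steps=64, eps=1e-4, max_dist=100.0):
    """March along a ray, stepping by the distance to the nearest surface.
    Returns the hit distance, or None on a miss."""
    t = 0.0
    for _ in range(max_steps):
        p = [origin[i] + t * direction[i] for i in range(3)]
        d = sdf(p)
        if d < eps:
            return t
        t += d
        if t > max_dist:
            break
    return None

# A ray fired straight down +Z should hit the sphere at distance ~4.0:
hit = sphere_trace((0.0, 0.0, 0.0), (0.0, 0.0, 1.0), sphere_sdf)
```

Because the field only stores distances at a limited resolution per mesh, thin or extreme details get smoothed over, which matches the mid-scale role described above.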
It also seems expensive: 1440p/30fps on a PS5 (roughly a highly overclocked 5700 XT) with a single small flashlight and one sun light. Oh well, it looks neat and it’s a big step up from UE4’s current situation of “either use lightmaps, hack out your own stuff with the odd assortment of tools provided, or rely on our slow raytracing”.
I presume an average PC can handle up to a few million polys with Nanite, since a casual SATA SSD will become the bottleneck after that, but even that would be great. Just a wild guess; if anyone can offer better numbers, I’m very interested. SATA SSDs are still the most widespread among casual users.
I also wonder if you could claw back performance by slowing down the update rate of the GI and just caching it over a longer period of time.
Lumen is exciting, I hope the devs will share more technical details.
Just wrote my thoughts on it : ArtStation - The need for Lumen by Simon Majar
Thanks for writing that out!
My guess is they are accumulating the GI into probes like those in the NVIDIA DDGI presentation, using a combination of tracing methods like distance fields and voxel cone tracing.
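The DDGI idea in rough form: irradiance lives at probes on a regular grid, and each shading point interpolates the eight surrounding probes. A toy version of that lookup (real DDGI also weights probes by visibility and surface normal, which is omitted here):

```python
def trilinear_probe_lookup(probes, spacing, p):
    """Interpolate irradiance from the 8 probes surrounding point p.
    `probes` maps integer grid coords (ix, iy, iz) -> scalar irradiance.
    Real DDGI adds visibility (depth) and normal weighting on top of this;
    this toy version does plain trilinear interpolation only."""
    gx, gy, gz = (c / spacing for c in p)
    ix, iy, iz = int(gx), int(gy), int(gz)
    fx, fy, fz = gx - ix, gy - iy, gz - iz
    result = 0.0
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                weight = ((fx if dx else 1.0 - fx)
                          * (fy if dy else 1.0 - fy)
                          * (fz if dz else 1.0 - fz))
                result += weight * probes[(ix + dx, iy + dy, iz + dz)]
    return result

# Two opposite corners lit, the rest dark; sample the center of the cell:
probes = {(x, y, z): 0.0 for x in (0, 1) for y in (0, 1) for z in (0, 1)}
probes[(0, 0, 0)] = 1.0
probes[(1, 1, 1)] = 1.0
irr = trilinear_probe_lookup(probes, spacing=1.0, p=(0.5, 0.5, 0.5))
```

At the cell center every probe gets weight 0.125, so the two lit probes contribute 0.25 total; the visibility weighting in real DDGI is what prevents light leaking through walls between probes.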