Lumen GI and Reflections feedback thread

Lumen + VR is comically powerful. I remember experiencing the Oculus Quest 2 for the first time and being blown away by VR as a spatial medium, even with the incredibly limited environment fidelity and baked lighting. Now that we can have dynamic, per-pixel GI in VR, the possibilities for actually delivering on VR’s ambitions are a lot closer. I’ve had a fair number of people hype VR design to me, but I haven’t yet found a headset that meets all my needs.

I feel like I understand most of how Lumen is architected, at a high level at least, but even after skimming this paper I can’t quite follow what it’s arguing. The project leader is clearly very enthusiastic and aware of the latest developments in computer graphics, but I still can’t quite parse his explanation.

The core idea of trading off radiance-solving methods at different resolutions does appear to be true for both of them: Lumen’s tracing methods go contact AO → screen probes → world probes → skybox, and the media it traces against go screen space → surface cache → far field → skybox, at least to my knowledge.
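To make that fallback chain concrete, here’s a minimal sketch of how I picture it. This is just my mental model, not actual engine code, and every Trace* function below is a hypothetical stand-in:

```cpp
// My mental model of Lumen's per-ray fallback chain, not actual engine code.
// Each Trace* function is a hypothetical stand-in that either resolves the
// ray against its medium or misses so the next, coarser stage can try.
#include <optional>

struct Ray      { float origin[3], dir[3]; };
struct Radiance { float rgb[3]; };

// Stubs standing in for the real tracing stages.
std::optional<Radiance> TraceScreenSpace(const Ray&)  { return std::nullopt; }
std::optional<Radiance> TraceSurfaceCache(const Ray&) { return std::nullopt; }
std::optional<Radiance> TraceFarField(const Ray&)     { return std::nullopt; }
Radiance SampleSkybox(const Ray&)                     { return {{0.5f, 0.7f, 1.0f}}; }

// Try the most detailed medium first, then fall back to coarser ones.
Radiance TraceLumenStyle(const Ray& ray) {
    if (auto hit = TraceScreenSpace(ray))  return *hit;  // on-screen geometry
    if (auto hit = TraceSurfaceCache(ray)) return *hit;  // off-screen, cached lighting
    if (auto hit = TraceFarField(ray))     return *hit;  // distant geometry
    return SampleSkybox(ray);                            // ray escaped the scene
}
```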

If I understand it correctly, the radiance cascade paper is arguing that real-time path tracing via interpolating noisy samples can’t really work (due to the amount of interpolation needed), and in a sense Lumen works similarly: each screen probe represents many rays bundled together (8x8 normally), effectively sampling 64 rays per probe. Those probes aren’t placed at every pixel, though, just where they will gather the most useful lighting information, and the radiance probes are then interpolated to the GBuffer. This means the lighting contributing to the final image is actually incredibly stable.
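A rough sketch of what that probe-then-interpolate gather could look like, under my assumptions (8x8 directions per probe, four-probe blending; the real weighting uses depth/normal similarity, which I’m omitting):

```cpp
// Sketch of the probe gather as described above (my reading, not engine
// code): each screen probe bundles an 8x8 grid of ray directions (64 rays),
// and each GBuffer pixel blends nearby probes instead of tracing itself.
constexpr int kProbeRes = 8;  // 8x8 directions per probe

struct Probe {
    float radiance[kProbeRes * kProbeRes][3];  // one RGB sample per ray
};

// Blend four surrounding probes into one pixel's diffuse irradiance. Real
// weights would come from screen distance plus depth/normal similarity,
// which is what keeps the result stable; here they are just passed in.
void InterpolateToPixel(const Probe* probes[4], const float weights[4],
                        float outIrradiance[3]) {
    for (int c = 0; c < 3; ++c) outIrradiance[c] = 0.0f;
    for (int p = 0; p < 4; ++p) {
        float avg[3] = {0.0f, 0.0f, 0.0f};
        // Average the probe's 64 directional samples into one diffuse term.
        for (int d = 0; d < kProbeRes * kProbeRes; ++d)
            for (int c = 0; c < 3; ++c) avg[c] += probes[p]->radiance[d][c];
        for (int c = 0; c < 3; ++c)
            outIrradiance[c] += weights[p] * avg[c] / (kProbeRes * kProbeRes);
    }
}
```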

what is this about? throwing buzzwords? that’s very amateurish, sry. maybe you should learn to build and bench lumen.

lumen does all that. and better. those cascades fail to impress. looks like fuzzy shadows and that’s it. no real GI shown. and lumen does proper GI, which is not bound by screen space.

what do you think illuminates the ceiling? that lamp is in the sky and very much offscreen. so is the floor. screenspace GI w/e. that would not do that.


I’m further from understanding it than you are, @jblackwell :sweat_smile:, but you have clarified some things for me, thanks.

Thank you for your kind words @glitchered. Sorry for not being as much of an expert as you are.

I posted it because it was published on 80 lvl, so it must be ‘rich’ in some way. It could be interesting to curious people and/or the Lumen team.

I don’t know if it’s only screen space or not, but I have seen some parts in this video that look like they’re offscreen. I don’t know if the GI is, but at least the ‘direct lighting’ seems to be:

Anyway, you can’t tell me that Lumen has no limitations, noise, or performance issues.

this looks cool. nice soft lighting. but i don’t see much interaction. this is not GI per se. it’s volumetric illumination, but with no surface color pickup and transport. no bounces, which are the core of GI.

it’s like just emissive shapes transporting light via a volume. all white surfaces, no bounces. might as well do it with a modified diffuse term and some lamps inside the beetles and statues.

Apologies, didn’t mean to explain over you, that wasn’t my intention.

Wow, I’ve never seen radiance behavior like that before. That’s a very interesting technique!

The paper did show reflections working, although I still cannot understand how. From what I’m reading, the paper is more concerned with the final gather method than with the media actually being traced against, but I could be wrong.

yep. the paper has some nice shots. the coder is definitely a demoscener. :smiley:

thinking about technicalities, it seems to be a mip-mapped volume filled with light intensity values. and it’s raymarched. when you hit a surface you march along the normal and get the diffuse response. when you march along the reflection ray you get the specular response. i’ve done some marching myself. just some ice clouds tho.
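something like this, purely my guess from the video, with made-up names and constants. the wide-cone-along-the-normal vs. narrow-cone-along-the-reflection split is the part i mean:

```cpp
// rough guess at the marching loop, with made-up names and constants.
// a mip-mapped radiance volume is cone-marched: the sample footprint grows
// with distance, so farther samples read coarser mips and take longer steps.
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };
static Vec3 add(Vec3 a, Vec3 b) { return {a.x + b.x, a.y + b.y, a.z + b.z}; }
static Vec3 mul(Vec3 a, float s) { return {a.x * s, a.y * s, a.z * s}; }

// stub for a trilinear/mip-interpolated fetch from the light volume.
static Vec3 SampleRadianceVolume(Vec3, float) { return {0.1f, 0.1f, 0.1f}; }

// march a cone from 'origin' along 'dir'. a wide cone along the surface
// normal gives the diffuse response; a narrow cone along the reflection
// vector gives the specular response.
static Vec3 ConeMarch(Vec3 origin, Vec3 dir, float coneAngle, int steps) {
    Vec3 accum = {0.0f, 0.0f, 0.0f};
    float t = 0.1f;        // small offset to escape the surface
    float weight = 1.0f;   // remaining visibility, front-to-back
    for (int i = 0; i < steps && weight > 0.01f; ++i) {
        float radius = t * std::tan(coneAngle);         // footprint at distance t
        float mip = std::log2(std::max(radius, 1.0f));  // coarser mip when wider
        Vec3 s = SampleRadianceVolume(add(origin, mul(dir, t)), mip);
        accum = add(accum, mul(s, weight * 0.25f));     // crude front-to-back blend
        weight *= 0.75f;
        t += radius + 0.5f;                             // step grows with the cone
    }
    return accum;
}
```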

He has some videos on his channel, covering different situations and tons of advanced experiments, not only realtime GI; he even created a path tracer 7 years ago. Some demos are newer, some older. For example, this one, even if it’s screenspace, is from 5 years ago (!). The new demos seem to handle offscreen content, and I suppose they could be animated like this one, too:

Noo, don’t worry! It was a sincere comment, just informing and thanking you, not reproaching you.


you fanboying the dude? why you tryna smooth-talk this technique to me? i’m not convinced it’s of use for lots of quality 3d scenarios. i watched this video that explained what the data looks like. a mip-mapped volume of cubemaps. wth. i was close. cubemaps enlarge my data estimate by x1536. the storage implications of this volume are humongous. and it’s still not doing offscreen. years or 6 months ago.
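back-of-the-envelope for that estimate. the numbers are mine, not from the video: a 16x16 cubemap face x 6 faces = 1536 samples per probe, plus an assumed 64^3 probe volume and RGBA16F samples:

```cpp
// back-of-the-envelope for the "x1536": 6 cubemap faces x 16x16 texels each
// = 1536 radiance samples per probe instead of 1 scalar per voxel. grid
// resolution and texel format below are assumed, not from the video.
#include <cstdio>

int main() {
    const long faceRes  = 16;                     // assumed texels per face edge
    const long perProbe = 6 * faceRes * faceRes;  // 6 faces -> 1536 samples
    const long gridRes  = 64;                     // assumed 64^3 probe volume
    const long probes   = gridRes * gridRes * gridRes;
    const long bytes    = 8;                      // e.g. RGBA16F per sample
    double mib = double(probes) * perProbe * bytes / (1024.0 * 1024.0);
    std::printf("%ld samples per probe, ~%.0f MiB before mips\n", perProbe, mib);
    return 0;
}
```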

it’s a nice illumination technique, they used it for their game and it looks good from their perspective. but… it still has its technical limits in the grand scheme of things. it’s like any other “product” / technique.

either way… i think we’re derailing the feedback with wishes and tech banter. /tangent

I have no technical idea about this. Just letting you (and the Lumen team, specifically) know the existence of this guy. Maybe he could even give a hand with Lumen, who knows.

Just to finish, where is the orange lighting coming from?

1 Like

(i noticed: for some reason shadow mapping kinda broke in 5.3.2. the right hand prop doesn’t cast mapped shadows anymore. raytraced shadows still look fine tho. soft and hard contacts. (i did a dirt cheap fix on the queen’s chromatic function, btw. i dunno what the shader artist thought there :)). i’ll have to find a way to load the lite city park in this config. it runs great in its own minimalist setup. in the full blown project i can’t load the showcase map anymore.)

fixed it. the lights were stationary. the shadows appeared in editor, but not ingame. i set them to movable and they worked again. dunno for sure if i changed mobility or this was always “broken” like that. hmm. the shadows are actively rendering either way.

Lumen does not handle cast shadows

it’s all part of the lighting equation. at this point and in this screenshot everything is raytraced against a BVH or two. which is great, btw.

I’ll tell you one of the reasons why I like that guy and these methods.
He and the team spoke about how TAA falls apart in dynamic scenes.
It’s not that they couldn’t implement it, it’s just not good enough for their game.

The reason his GI doesn’t have noise is that it isn’t designed on the crutch of TAA/frame blending. He mentioned it can work fine for offscreen information by using world-space information.
It seems like it just needs a small amount of fine tuning.

Same presentation Miguel1900 shared, but here is the GI timestamp:


Thank you @TheKJ .

Now I have seen that he mentions Lumen too, so it’s an informed guy presenting a (presumably) new/unknown technique, since he is up to date on already existing methods and techniques:

As far as I can understand, he seems to say that Lumen uses one single resolution (or two, not sure) for the probes, while his technique uses high resolution for the closest areas and successively lower resolutions for farther ones, becoming exponentially cheaper without losing quality. And I think he says it doesn’t need denoising, which is quite interesting given Lumen’s noise.
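Here is a rough sketch of that scaling idea as I understand it from the radiance cascades material: each cascade makes the probe grid sparser while quadrupling its angular resolution, so every cascade costs roughly the same number of rays. The grid sizes and factors below are my assumptions, not numbers from the paper:

```cpp
// Rough sketch of the scaling idea (2D case, my assumed factors): each
// cascade halves probe density per axis and quadruples angular resolution,
// so every cascade stores roughly the same number of rays while far-field
// detail gets exponentially cheaper per unit of distance covered.
#include <cstdio>

int main() {
    long probesX = 256, probesY = 256;  // cascade 0: dense probes, few directions
    long dirs = 4;
    for (int c = 0; c < 5; ++c) {
        std::printf("cascade %d: %ldx%ld probes x %ld dirs = %ld rays\n",
                    c, probesX, probesY, dirs, probesX * probesY * dirs);
        probesX /= 2; probesY /= 2;  // sparser probes farther out
        dirs *= 4;                   // finer angular detail farther out
    }
    return 0;
}
```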

This is Lumen’s grid… in a random scene with default (cinematic) settings:


Probe resolution is actually variable to an extent, for world probes at least. Probes nearest to the camera are supersampled, although I’m forgetting the heuristics that drive it. There could be more variability but I would need to reread the presentation.


Stochastic Shadows question for @Krzysztof.N (as there isn’t currently an SS thread): will stochastic shadows support translucency, or should they only be considered an opaque shadowing solution?

For the love of god, if they are stochastic, allow them to accumulate independently of TAA and upscalers.

> This is Lumen’s grid… in a random scene with default (cinematic) settings:

@Krzysztof.N
It would be worth investing in a jitter pattern that only has an upper-right diagonal point and a lower-left diagonal point. Two-frame accumulation is extremely responsive. I would say Lumen just needs better logic to pick good candidate parts of the screen for light-channel interpolation.
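For illustration, a tiny sketch of the two-point diagonal jitter I mean; the quarter-pixel offsets are just my assumption of what such a pattern could look like:

```cpp
// Tiny sketch of the two-point diagonal jitter suggested above: the subpixel
// offset alternates between upper right and lower left every frame, so a
// two-frame accumulation already sees both samples. The quarter-pixel
// offsets are my assumption, not an engine value.
#include <cstdio>

struct Jitter { float dx, dy; };

Jitter TwoFrameJitter(unsigned frameIndex) {
    return (frameIndex & 1u) ? Jitter{-0.25f, -0.25f}   // lower left
                             : Jitter{ 0.25f,  0.25f};  // upper right
}

int main() {
    for (unsigned f = 0; f < 4; ++f) {
        Jitter j = TwoFrameJitter(f);
        std::printf("frame %u: offset (%+.2f, %+.2f)\n", f, j.dx, j.dy);
    }
    return 0;
}
```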

Also, many games use a static environment but have a moving sun. Do you think there would be a way to bake probe trace information only for directional light movement patterns?

how did you produce that grid? i can’t get that done. isn’t this direct lighting in the first place? what’s the light source? this could be the issue there.

edit: nvm. dirt cheap to test. an hdri with a sharp sun exposes the lumen light grid or some sort of grid-like structures (depending on where it hits).

and there are alternatives for hdri. just put a sun in the level. ez