Lumen GI and Reflections feedback thread

Have you seen the Lumen Performance Guide? There we propose the following scalability scheme:

  1. Epic GI + Epic Lumen Reflections
  2. High GI + High Lumen Reflections
  3. High GI + SSR
  4. Medium GI (DFAO)
  5. Low GI (Unshadowed skylight)

“High GI + SSR” is what we usually use on XSS, which should be comparable to lower-end PC GPUs. Is this still too expensive? Can you show a ProfileGPU with async disabled (r.RDG.AsyncCompute 0) and explain what performance you are expecting there?
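
For anyone capturing that profile, it looks roughly like this from the in-game console (a sketch; the second command dumps a one-frame GPU timing breakdown to the log):

```
r.RDG.AsyncCompute 0
ProfileGPU
```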

Yes. Lumen caches the most expensive things in screen space, which allows it to avoid being limited by parameterizations like probes or lightmaps, and to automatically adjust quality to what’s visible on screen. The downside is that screen space is a lot of data and there’s always something changing, so a large part of it needs to be recomputed every frame.

There are also a few other caches, like the World Space Radiance Cache or the Surface Cache, which cache aggressively based on the assumption that most of the scene doesn’t change from frame to frame, but they still need to update slowly over time, as explicitly tracking scene updates would be even more expensive.
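
As an illustration of that “update slowly” pattern (a generic amortized-update sketch, not Lumen’s actual code): re-trace only a fixed budget of cache entries per frame, so the per-frame cost stays flat while stale entries converge over several frames.

```cpp
#include <cstdint>
#include <vector>

// Hypothetical cache entry; the real Surface Cache / Radiance Cache layouts
// are far more involved.
struct CacheEntry
{
    float Radiance[3];
    uint32_t LastUpdatedFrame;
};

// Update only BudgetPerFrame entries each frame, round-robin, so the cost is
// constant regardless of cache size; everything else stays slightly stale.
void UpdateCacheAmortized(std::vector<CacheEntry>& Cache, uint32_t Frame,
                          uint32_t BudgetPerFrame)
{
    const size_t Start = (size_t(Frame) * BudgetPerFrame) % Cache.size();
    for (uint32_t Index = 0; Index < BudgetPerFrame; ++Index)
    {
        CacheEntry& Entry = Cache[(Start + Index) % Cache.size()];
        // A TraceAndShade(Entry) call would stand in for the expensive ray
        // trace plus material/lighting evaluation at the hit point.
        Entry.LastUpdatedFrame = Frame;
    }
}
```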

If you want to cache more aggressively, then you need to move away from screen space and cache your lighting in, e.g., world space probes, like DDGI does, but this comes with various downsides. We did experiment with it and there’s a prototype under r.Lumen.IrradianceFieldGather, though it seems to be broken now. It’s just not clear whether a grid of probes around the camera would be a good tradeoff. There’s lots of leaking and other issues, and the only way around them is manual volume placement. Runtime DDGI probe updates may be expensive too, as you need to trace a ray and then evaluate the material and direct lighting at each hit point.
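
For anyone who wants to poke at that prototype, it’s toggled from the console (with the caveat above that it’s likely broken by now):

```
r.Lumen.IrradianceFieldGather 1
```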

So basically at that point it feels like this solution would be in this weird spot between Lightmass and Lumen, where it’s neither fast enough to cover all Lightmass platforms (mobile, Switch, VR), nor good enough for Lumen platforms (consoles, PC). It may still be a good solution for some games, just like LPV was, but it’s unclear whether it would be used by many engine users.

Yes, it’s possible, but it won’t fix all the issues. The general limitation is that a probe has limited resolution and can’t represent lighting changes between two probes. DDGI augments that by using a VSM as a shadowing function, but it’s a very low-res shadow map, which behaves just like you would expect a 32^2 shadow map to behave :). The problem is that those lighting changes aren’t just small-scale indirect shadows or similar detail; it may be, for example, outdoor lighting leaking indoors and breaking the entire scene.
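
For context, the DDGI-style shadowing function stores mean and squared-mean occluder distance per probe texel and weights each probe’s contribution with Chebyshev’s inequality; roughly like this (a sketch of the published DDGI weighting, not UE code):

```cpp
#include <algorithm>

// Chebyshev visibility weight, DDGI style. MeanDist and MeanSqDist are the
// two moments stored in the probe's low-res depth map for the direction
// toward the shaded point; DistToProbe is the shaded point's distance.
float ChebyshevVisibility(float MeanDist, float MeanSqDist, float DistToProbe)
{
    if (DistToProbe <= MeanDist)
    {
        return 1.0f; // Closer than the average occluder: treat as visible.
    }
    const float Variance = std::max(MeanSqDist - MeanDist * MeanDist, 1e-4f);
    const float Diff = DistToProbe - MeanDist;
    // Chebyshev upper bound on the probability the probe sees this far.
    return Variance / (Variance + Diff * Diff);
}
```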

You also can’t increase the number of probes too much, as their cost grows really quickly due to them being a volumetric representation. If you wanted per-pixel quality instead of a single value per pixel, you would need to interpolate ~8 probes per pixel, where each one stores something like 64 values.
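
To make that blow-up concrete (illustrative numbers, not Lumen’s actual layout): per-pixel quality from probes means blending the 2×2×2 surrounding probes, each storing on the order of 64 directional values, so one pixel of lighting touches ~512 cached values instead of one.

```cpp
// Back-of-envelope probe lookup cost (illustrative numbers only).
constexpr int ProbesPerLookup = 2 * 2 * 2;  // trilinear corner probes
constexpr int ValuesPerProbe  = 64;         // e.g. an 8x8 octahedral map
constexpr int ValuesTouched   = ProbesPerLookup * ValuesPerProbe;  // = 512
```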

Usually it’s just impractical to track updates. A door opening may influence half the pixels/probes in the scene, but to know which ones, you would need to trace rays… And if you only trace rays for the entire scene when something changes, then you either get hitches on any small movement, or some of those movements will result in incorrect lighting.

Yes, but it could move towards a solution like Enlighten, where only form factors (visibility) are baked, and then you can adjust your lighting in real time. That feels like a good tradeoff and likely could cover all Lightmass platforms.


Yes, I’m already using all of that, and performance is generally good on my end. However, the lack of scalability with Lumen on low and medium settings can sometimes make it challenging to reduce costs. I need at least a minimally acceptable and stable global illumination because the game requires players to use their flashlight to explore in darkness, so simply disabling Lumen isn’t an option.

From what I’ve seen (in GDC talks or on their YouTube channel), Enlighten does indeed seem very efficient. It’s not perfect, but it would probably be a very nice middle-ground solution to explore with this kind of approach. :)

If this is mostly about a single flashlight, then LPV would be a perfect solution for scaling down. The Last of Us used it on PS4 to get some really nice results. LPV has pretty good quality and is cheap, but unfortunately doesn’t really scale beyond a single light source, so it’s not the best fit for UE, where we try to keep the number of features and overall complexity manageable by building more general solutions.


nice tech talk. where would the irradiance cache sit in this? it does not have a visualizer. or is it the world space probe grid? is this all of it?

i could defo see the leaks happening. yes… i know the limitations of radiance volumes from 3.x blender eevee. they gotta be authored volumes to get the light details where required. and need a couple of traces. certainly not realtime material. yo

It reuses the world space radiance cache to spawn a new grid of probes. Each probe stores depth in order to do DDGI-style visibility testing, and after tracing it’s converted into irradiance. This thingy replaces the final gather, so instead of using ScreenProbeGather it does the final gather from world space probes. Basically a DDGI prototype.

Though it was done like 2 years ago or something, so it’s likely completely broken nowadays :).
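
The “converted into irradiance” step is the usual cosine-weighted integral over the traced directions; a minimal sketch of that conversion, assuming the probe’s trace directions uniformly sample the sphere (which the prototype may not do exactly):

```cpp
#include <algorithm>
#include <cstddef>
#include <vector>

struct Vec3 { float X, Y, Z; };

static float Dot(const Vec3& A, const Vec3& B)
{
    return A.X * B.X + A.Y * B.Y + A.Z * B.Z;
}

// Monte Carlo estimate of irradiance for surface normal N from one probe:
// E(N) ~= (4*pi / SampleCount) * sum_i Radiance_i * max(0, dot(N, Dir_i)),
// where Dirs uniformly sample the full sphere (pdf = 1 / (4*pi)).
float IrradianceFromProbe(const std::vector<Vec3>& Dirs,
                          const std::vector<float>& Radiance, const Vec3& N)
{
    constexpr float Pi = 3.14159265f;
    float Sum = 0.0f;
    for (std::size_t Index = 0; Index < Dirs.size(); ++Index)
    {
        Sum += Radiance[Index] * std::max(0.0f, Dot(N, Dirs[Index]));
    }
    return Sum * (4.0f * Pi / float(Dirs.size()));
}
```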


The visuals also need to stay consistent across different scalability levels, just a bit more approximate and less precise between settings like High and Epic. Currently, Lumen really lacks the ability to scale below High. It would be beneficial if we could lower it further without making the entire rendering unstable or overly temporal, for instance. Many players still have lower-end setups on Steam (RTX 2060 and similar) and are somewhat left behind in the current state.

That’s why I had thought about Nvidia’s approach with their dynamic probes: while the rendering wasn’t as precise as Lumen’s, it provided a lot of scalability to run on a wider range of hardware. Even if it’s naturally less precise, that’s not a big issue; it’s often an acceptable visual/performance trade-off.

@Krzysztof.N

DDGI was poorly implemented

which means that you need to have really thick walls to prevent leaking,

You and @Daniel_Wright of all people should be aware of the solutions made for this problem, for both GI and baked scenarios.

low-res VSM visibility.

Why are you assuming we want to use VSM? Nanite is 101%+ slower than optimized topology, and VSMs abuse temporal AA/upscaling smear. This is the MOST voted feedback regarding UE.
Stop ignoring it.

You could also work on baking the visibility info of probes: bake probes around static objects, and then only dynamic objects such as doors darken indoor probes dynamically.

Have you seen the Lumen Performance Guide? There we propose the following scalability scheme:

Yes, and Medium promotes garbage quality compared to 8th gen world lighting systems, and Lumen is too expensive for the majority of games (which are made of around 70% static objects; not 99%, not 30%, 70%).

nor good enough for Lumen platforms (consoles, PC).

The splotchiness of Lumen is kinda insulting to 9th gen consoles, so I’m not sure what you’re trying to infer here. Infinite bounce does not make up for the ghosting, the poor normal rejection logic, the noisy AO, nor the cost in static scenes.

You also can’t increase the number of probes too much, as their cost grows really quickly due to them being a volumetric representation.

Yeah, that’s why we need a more optimized layout for mostly static scenarios. It shouldn’t be volumetric; it should be based on a hierarchy of distances between static geo.

Yes, but it could move towards a solution like Enlighten,

And 3rd party studios use UE to move away from 3rd party tools.
Enlighten doesn’t solve the static room scenario where a window or door floods the scene with light via interpolation.

I would stop bringing up UE’s baked solutions like Lightmass, since it’s a horrible and outdated system. It’s extremely heavy on memory and has severe leaking issues. Several studios and engine producers during the 8th gen moved away from lightmaps in favor of interpolated baked solutions (Quantum Break, MGSV, The Division), yet those look better and perform better than your suggested Medium scalability.

When UE was announced, you talked about what you noticed was missing in UE titles: geometry and lighting. Well, UE makes both of those issues worse. Most games are not Fortnite. Most games are static environments (that need darkening from dynamic objects like doors, moving lights, etc.) plus dynamic lights. UE has no system that caters to this scenario without drastically harming performance, or that even matches the quality achieved in 8th gen games.

Many players still have lower-end setups on Steam (RTX 2060 and similar) and are somewhat left behind in the current state.

I’m talking about the poor quality being produced on 9th gen consoles, which even that hardware struggles with.

Thousands of people are sick of the quality being produced by UE’s systems.
Again, in case you missed it, read my suggestion for Lumen’s temporal normal rejection issues: Lumen GI and Reflections feedback thread - #1929 by TheKJ

I’m in contact with a developer who will be releasing a cheap (as in FXAA-cheap) post-process AA that can run beneath the TAA (including DLSS, TSR, etc.) values shown there, since it doesn’t do well with jagged edges.
Please stop assuming devs want to abuse TAA’s issues. This has been mentioned several times in the forums for years, and now it’s officially the most important issue to the voting community.

i think you failed on the buzzword there. vsm in radiance probe lighting refers to variance shadow maps, not virtual shadow maps. from what i can google, it’s the coarse shadow/depth info baked into the probe.

baking static probes is basically what lpv, irradiance volumes and/or the volumetric lightmap are. updating those is maybe not hard but compute intensive, and you may still get artefacts, cause the grid resolution is limited, the grid - if not authored - is bound to the world grid, directionality is not there but matters in many cases, and getting a good average look needs balance and propagation between probes. was all explained by krzysztof.

the whole probe lighting tech is limited, cause you have a static grid size, maybe some grid lods, no detail refinement, and rng sampling strategies, cause you can’t shoot tons of rays per frame. realtime gi is a coarse solution and temporal.
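
for reference, the “static grid” limitation in code form: every shaded point just blends the 8 surrounding grid probes, no matter what geometry sits between them, which is exactly where unauthored grids leak (a generic trilinear lookup sketch, not engine code):

```cpp
#include <cmath>

// Trilinear probe-grid lookup: blends the 8 surrounding probes regardless of
// walls between them, which is why unauthored grids leak light.
float SampleProbeGrid(const float* ProbeValues, int NX, int NY, int NZ,
                      float PX, float PY, float PZ /* in grid units */)
{
    const int X = int(std::floor(PX));
    const int Y = int(std::floor(PY));
    const int Z = int(std::floor(PZ));
    const float FX = PX - X, FY = PY - Y, FZ = PZ - Z;

    // Assumes X..X+1, Y..Y+1, Z..Z+1 are all in bounds.
    auto At = [&](int I, int J, int K) {
        return ProbeValues[(K * NY + J) * NX + I];
    };

    const float C00 = At(X, Y,     Z    ) * (1 - FX) + At(X + 1, Y,     Z    ) * FX;
    const float C10 = At(X, Y + 1, Z    ) * (1 - FX) + At(X + 1, Y + 1, Z    ) * FX;
    const float C01 = At(X, Y,     Z + 1) * (1 - FX) + At(X + 1, Y,     Z + 1) * FX;
    const float C11 = At(X, Y + 1, Z + 1) * (1 - FX) + At(X + 1, Y + 1, Z + 1) * FX;
    return (C00 * (1 - FY) + C10 * FY) * (1 - FZ)
         + (C01 * (1 - FY) + C11 * FY) * FZ;
}
```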

your whole instability rant schtick ignores the fact that every realtime gi solution is temporal. even the ea engine needs a couple of seconds to “converge” (i call it “stabilise”). you look at static images and say “this looks good” and fits some sort of millisecond budget, but you miss the point that it’s a screenshot. in motion this stuff is still noisy, and will continue to be. and it’s compute intensive as soon as you introduce new data. that’s why it runs all the time and spreads computation across multiple frames. temporal compute load.

somebody at valve® at some point said in a tech reveal “noise is your friend”. seems it’s not yours. endless battle tho.

at some point we may reach stable filmgrain convergence, tho. i’m sure.


I’m not saying they shouldn’t. I’m saying they should resolve without incompetent/abusive TAA such as DLSS or TSR, which Lumen for the past 3 years of dev time still cannot do. Nor can Lumen handle 2x subpixel jitter without breaking into a flickering mess.

You’re also completely wrong. SVOGI, radiance cascades… and you’re not exactly defining what “realtime GI” means. MGSV uses large SHs and accumulates lighting in real time; The Division computes bounce light in real time. Realtime does not mean shooting optimization and efficient caching in the foot.

you look at static images and say “this looks good”

If that were true, I would love TAA and DLAA. But no, I’m the only tech influencer showing how poor motion looks in games nowadays. I literally just posted here about the motion instability from Lumen regarding subpixel jitter in motion.

i think you failed on the buzzword there. vsm in radiance probe lighting refers to variance shadow maps, not virtual shadow maps.

Fair enough, though I should continue to advocate for Lumen’s independence from Nanite, as they have stated they are not focusing on doing so.

back to TAA rant? ohh well… -_-

this is a whole different topic away from lumen.

in general… deferred rendering computes raw values per pixel. there’s no easy way to antialias that. you do frame disassembly. you should know what a raw output buffer looks like. now think about how you would code a shader and do some math to filter that, not just complain about what’s not working in your opinion. do it better.

No, it’s not. Even the source of Lumen can argue with that.

there’s no easy way to antialias that.

It’s called hybrid anti-aliasing. I’m not anti-TAA, I’m anti-garbage-TAA that ghosts, smears, or kills performance, and that’s been done before too, outside of Unreal.

This is actually how Unity does it - Adaptive Probe Volumes do their job, placing grid-aligned probes only near geometry (with an option to shift/remove probes that are inside walls so they won’t mess up lighting) and very sparsely in between. But the number of issues and limitations in practice is… ouch: baking time and memory; lack of precision (light leaks, and in general the way probes lit from outside are separated from those inside buildings is one screen-space hack, often failing at grazing angles); the need to manually place and arrange reflection probes and deal with their overlapping/visual bugs (they can only work perfectly in perfectly square rooms/buildings); etc.

If anything, after dealing with APVs + baked reflection probes for a while, I became more forgiving of Lumen artifacts & limits, because baking/adjusting stuff is so frustratingly time/effort-consuming…


I really feel like anyone who has spent a considerable amount of time with any other realtime GI will inevitably come to the conclusion that Lumen’s limitations are well worth it for most games (and Nanite as well)


Yep. Everything is a trade-off between quality, artist time, limitations, and performance. Unreal really doubles down on getting rid of limitations and reducing artist time. With today’s game market, not every game developer is in a position to have 400-man teams working 5+ years on games. So Epic is making the right bet here.


Hi @Krzysztof.N,

Will this (or something similar/equivalent) come natively with Unreal 5? Any rough estimation, like a year, a lustrum? Anything would be hype!

Thanks!


@Daniel_Wright @Krzysztof.N

Stop making Lumen GI cost more if you’re not even going to fix the issues everyone is talking about (tech influencers, developers outside this forum, consumers, etc.): noise, splotches, unstable disocclusion.

This is the second time I’ve seen a performance downgrade for an effect system that is already super slow (compared to 8th gen GI for open worlds) and overly abused in the industry.

Testing 5.5.0 with Lumen and VR. Seeing some glitches only in the right eye (reflection glitches?!).
See video. Seems like a bug to me. Must find a way to replicate this, but I expect I’m not the only VR user running into this.
In the video I’m switching between GPU Lightmass streaming levels and Lumen streaming levels.
Edit: see the fix two posts below this one


Those glitches have existed for years now, unfortunately (no QA testers for VR…?). If I remember correctly, yeah, they happened in the right eye and on reflective surfaces, opaque or translucent. I called it the Christmas lights bug. I even reported it, but it was ignored, as usual.

In 5.4 and 5.1 I don’t have glitches like these in my projects. I noticed that when enabling MegaLights the color of the glitches seems to change, strangely enough. Still trying to see a pattern and create some way for the people @Epic to replicate this.

Lumen + VR is really a game changer for my work (architect), so if this could be fixed that would be great!

Edit: The glitches disappear when using the console command sg.ReflectionQuality 3. Any value lower than that and you get these colored glitches.
Edit 2: Side note - don’t use instanced stereo, or the reflection in one eye will be different.
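
In console/config form, the workaround above (sg.ReflectionQuality comes straight from this post; vr.InstancedStereo is the engine’s instanced stereo toggle, which normally has to go in DefaultEngine.ini since it’s read at startup):

```
sg.ReflectionQuality 3
vr.InstancedStereo 0
```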



@Krzysztof.N

So, as of 5.4, DX11 SM5 SWRT Lumen GI worked perfectly. The use case is being able to spin up thousands of instances on pixel streaming without huge AWS G5 GPU costs, and to allow multiple sessions per instance. It’s also the last hidden gem for squeezing out “optimal” quality rendering while still hitting really responsive frames; it feels like the majority of people maybe didn’t know this optimization path even existed [WELL WE DID AND WE TOOK ADVANTAGE :rofl:] (you asked what the benefit was). Our workflow entails traditional LOD groups, non-Nanite DX11 SM5 with Lumen SWRT GI, and we maintain 90 FPS or higher with 4 sessions shared on average on very low-spec AWS instances in our fleet. Not needing to worry about Nanite software rasterization, WPO limitations, the SM6 memory footprint, HWRT GPU instances, or (being a chunk-loaded application) Nanite mesh sizes made this a win, win, win all the way around.

We also have DX11 SM5 Lumen projects running on APUs and the Steam Deck, and it was a hidden gem for GI and awesome performance on older-gen hardware (even GTX cards). I go over a quick profile between 5.4 DX11 with Lumen and DX12 (SM6) as required in 5.5, and it’s over a 20-frame loss in performance at native res and default settings on both (on an RTX 2080 Ti); the falloff is far greater on lower RTX cards like the RTX 2060. I honestly think marketing and awareness could have been better here: instead of people focusing on the heavy lift of DX12 and SM6, the communication could have been to fall back to DX11 and SM5 to get some of that performance back. Outside of the little bit of extra Chaos thread cost, it was pretty close to UE 4.27 perf, not including the Lumen gathering cost, which wasn’t bad at all if you used global vs detail traces. TSR or FSR also working here was icing on the cake to push very, very low-end cards over the 60 FPS mark. Trust me, it works!
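
For concreteness, the kind of project setup being described, as a sketch (these are the stock settings for DX11 + Lumen with software ray tracing in 5.4; Generate Mesh Distance Fields must be enabled for SWRT):

```ini
; DefaultEngine.ini (sketch of a 5.4-era DX11 SM5 + Lumen SWRT setup)
[/Script/WindowsTargetPlatform.WindowsTargetSettings]
DefaultGraphicsRHI=DefaultGraphicsRHI_DX11

[/Script/Engine.RendererSettings]
r.DynamicGlobalIlluminationMethod=1
r.ReflectionMethod=1
r.Lumen.HardwareRayTracing=0
r.GenerateMeshDistanceFields=True
```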

I go into much better detail in this post between each model and rhi.
Summary: it’s about a total loss of 123% of the initial starting 4.26 ms on an RTX 2080 Ti if we look at UE 5.5 alone, going from DX11 SM5 to Lumen, and a 110% loss of performance in something that ran well over 120 FPS in 5.4 with Lumen SWRT enabled.
Lumen no longer working in UE5.5 DX11 SM5 - Development / Rendering - Epic Developer Community Forums
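
Spelling that arithmetic out (reading the 123% as added frame time on top of the 4.26 ms baseline):

```
4.26 ms × (1 + 1.23) ≈ 9.50 ms    (~235 FPS → ~105 FPS)
```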

Also a little constructive feedback, as well as a plea to please reconsider this decision: this is going to literally block us from moving past UE 5.4 unless we want to take on the massive task of pulling support forward ourselves in 5.5 source (which we likely won’t). A change like this that isn’t announced, included in patch notes, or given official documentation, other than people digging through forums, is a little rough to swallow. Please don’t take that the wrong way; I’m just asking for a bit of empathy, no different than knowing that maintaining multiple renderers across desktop and mobile is a huge burden, and one we are extremely grateful that you guys are wrangling. A little better heads-up would be awesome, especially when the official release notes and docs still seem to imply that SM5 Lumen is supported.

7 Likes