Lumen GI and Reflections feedback thread

Hi @Krzysztof.N ,

If I’m not wrong, you also work on Path Tracing and/or Ray Tracing and translucency somehow, so I hope I can report this here, to you: I have found that PT and RT translucency seem to be broken in 5.5 and 5.6. If you enable either of them (PT or RT), translucent materials (I tested glass) become invisible.

PS: r.SetRes in fullscreen also seems to be broken when using a resolution higher than the screen’s native one (AKA ‘supersampling’), in both 5.5 and 5.6. I hope your colleagues are aware of it, but I worry it will turn out like the packaging issues, which they didn’t notice despite tons of users complaining (still not solved in 5.5.1 since Preview) :sob: If you could report it to someone, that would be incredible :pray:
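For anyone who wants to reproduce the r.SetRes case, this is roughly what I run in the console. The resolution values are just an example (assuming a 1920x1080 native display); the suffix selects the window mode:

```
; On a 1920x1080 native display, request a higher-than-native
; fullscreen resolution ("f" = fullscreen, "w" = windowed):
r.SetRes 3840x2160f
```

In 5.4 this gives a supersampled fullscreen image; in 5.5/5.6 the fullscreen switch reportedly fails when the requested resolution exceeds the native one.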

EDIT:
Testing the Dark Ruins sample today, I noticed some extreme noise in some reflections. It reminds me of the kind of noise MegaLights produces, but it doesn’t seem to be related, so it must be coming from Lumen:


(Amplified by TSR, but not caused by it alone.)

Thank you!

Took the liberty of cutting a video going over targeting older hardware with Lumen GI. I also added some bonus content on how to drive cost even lower and potentially target very low-end devices like the Switch or PCVR.


Hello,

I’m still experiencing a lot of issues with Lumen being very unstable, especially with trees that partially block the player’s view.

Whether I disable HZB or not doesn’t make any difference. I’ve tested various Lumen-related console commands, but nothing seems to help. It really feels like Lumen is only good at rendering empty terrains with no occlusion, which is quite frustrating.

Here’s the YouTube link in 4K. With the encoding and the lower resolution of the forum’s video player, the issue may not be entirely visible: https://www.youtube.com/watch?v=0MQ5c_tU1JY

Of course, removing the tree entirely from Lumen’s calculations fixes the issue, but it also completely disconnects the tree from the scene. And if we have to do that for every object that occludes the view (like a tree with a building behind it in the distance), then Lumen’s usefulness becomes seriously limited, even pointless, especially compared to older systems like RTXGI that used grid-based probes.

Disabling all Lumen downsampling also fixes the issue, but it makes Lumen extremely resource-intensive, taking over 30 ms on its own. At that point, it’s heavier, less efficient, and offers lower fidelity than NVIDIA’s ReSTIR GI.

It’s really unfortunate that Lumen doesn’t address these issues. This problem seems to have been present from the beginning, and is likely inherent to its calculation method, leaving no choice but to seek an alternative, more efficient external GI solution that doesn’t suffer from occlusion problems, handles foliage better, isn’t so resource-heavy, and doesn’t rely on massive downsampling and heavy temporal techniques. It’s a real shame, and far from Epic’s initial promise of “We make the tools, you make the game,” especially when we end up having to use third-party solutions or downgrade visuals.

In this second video, screen tracing is completely disabled, downsampling is set to 16, TSR is pushed to its maximum with a 200% history, all running on Lumen hardware at native 4K. And honestly, the results are really not good.

I’m not able to reproduce this… so I don’t think it is.

A tree in the indirect lighting, with a large building in the distance: this occurs with both hardware and software Lumen, although it’s obviously more pronounced with the software version.

Occlusion and de-occlusion look absolutely terrible on these types of elements, which of course occur frequently in a game set in an urban environment.

Here’s a top-down view of the scene’s setup at this location on the map:

In this segment, still using Lumen in hardware mode, downsampling is set to 16 at native 4K with TSR, and Lumen’s screen tracing is disabled.

I removed many of the tree’s branches, but it obviously looks worse once you add back all the other branches and leaves.

I was able to reproduce the issue with a simple lighting pillar and a piece of a bus shelter as well. No AO in the scene, no screen traces, still at native 4K, hardware Lumen, etc.

Setting r.Lumen.ScreenProbeGather.DownsampleFactor to 4, for example, reduces the issue significantly, but we’re at around 20 ms at 4K on a 4080. In other words, it’s slower than NVIDIA’s ReSTIR GI while being less accurate, which isn’t viable.
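For readers following along, the trade-off under discussion comes down to a single cvar. The values below are illustrative, and my reading of the factor (probe spacing per screen axis) is an assumption worth verifying against your engine version:

```
; Default: one screen probe every 16 pixels (cheap, but unstable
; under fast disocclusion):
r.Lumen.ScreenProbeGather.DownsampleFactor 16
; Denser placement: one probe every 4 pixels (much more stable,
; but far more expensive):
r.Lumen.ScreenProbeGather.DownsampleFactor 4
```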

Maybe I just don’t know what I’m looking for? Or the geometry of my scene isn’t complex enough, but I am not seeing any excessive disocclusion issues in my test setup, at least nothing on par with yours.

This is using Lumen in software raytracing, in global tracing mode. Screen traces on, TSR as antialiasing, motion blur disabled. Hardware raytracing looks better but the goal was to create the worst case scenario.

Unfortunately I cannot record at 4K…


@Arkiras you have it too, but your AO setting/color is a little toned down. human eyes perceive brightness changes better than color changes. pseudo optics at the limits.

i repeat myself, but it’s kinda normal with TAA and TSR. given that distance difference and how fast the disocclusion occurs, it’s not able to recover or recompute the lighting quickly enough. you’d have to lower the Lumen update rate, and maybe change the AO update rates too. you may lose performance doing so, because you shorten the temporal range and have to compute more per frame.
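to be concrete about the update-rate knobs i mean: they’re exposed as post-process settings (“Lumen Scene Lighting Update Speed”, “Final Gather Lighting Update Speed”) and as cvars. the exact cvar names below are what i believe they are in recent versions, so verify them in your build:

```
; shorten the temporal history so lighting reacts faster after
; disocclusion (in exchange for more noise per frame):
r.Lumen.ScreenProbeGather.TemporalMaxFramesAccumulated 5
; or disable temporal accumulation entirely to inspect the raw signal:
r.Lumen.ScreenProbeGather.Temporal 0
```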

Here’s a zoomed, higher contrast version… I mean… I guess there’s some disocclusion artifacts but it seems remarkably minor to me.

And here it is with antialiasing disabled, to eliminate AA ghosting as a potential source:

The Lumen update time has already been reduced, but it doesn’t make a significant difference. Even increasing the update speed doesn’t resolve the issue. The only noticeable improvement comes from reducing the downsampling. Naturally, this is much less noticeable when running at over 60 FPS.

I agree that likely the only true fix is improved spatial resolution (both for Lumen and the image). Higher frame rates are a mitigation, not a fix, as artifacts will persist for less time before clearing up.

That said, there may be something you can do with the geometry/asset to get it to play nicer. The artifact appears at its worst to me on the very thin vertical details.

Maybe at problematic distances you could swap to a lower LOD that represents those details with a texture.

I am strongly against the introduction of mesh swapping at specific distances, as it would not only be noticeable (defeating the very purpose of Nanite, which is meant to provide seamless LODs across the board) but would also result in an incredibly cumbersome workflow. Additionally, issues with occlusion and de-occlusion are very noticeable even when close to the occluding object, making this solution unappealing and ineffective at addressing the problem.

The only options seem to be increasing Lumen’s internal resolution (dramatically inflating computation times and sacrificing performance), or completely excluding trees from Lumen’s calculations, which would result in a total disconnect between them and the environment.

Alternatively, expecting everyone to run at 60+ FPS is unrealistic given the system’s heavy demands and the current hardware most players use.

There simply isn’t a truly viable solution to this. RTXGI never had these issues, but NVIDIA abandoned it in favor of ReSTIR GI, which is 500 times more resource-intensive. Enlighten is prohibitively expensive, and baking lighting is impractical for very large maps with time-of-day systems or similar complexities.

rtxgi has other issues krzysztof explained in earlier discussions: grid resolution, probe resolution, and directionality of the probe volumes. you can do a map built out of squares and cubes, maybe put some trees in there to catch a glimpse of the gi probes, but as soon as something is not straight, or is very thin geometry misaligned with the probe grid, you get what i’d call “probe bleed” artefacts. the solution to that is authoring the probe placement, but nobody wants to do that.

(i’ve played with and seen different gi solutions since half life 2. there’s no perfect solution(, yet).)

Of course, grid-based probe systems are far from perfect, but they still offer a generally cost-effective solution. In fact, I believe the latest Indiana Jones game also relies on this approach. It’s highly efficient, even if not flawless (though Lumen itself is far from perfect). It always comes down to finding the right compromise, and in my case, I would have no issue accepting the trade-offs of a grid-based probe system. It’s a tried-and-tested method with well-documented strengths and weaknesses, successfully implemented in countless games like Cyberpunk, The Division, and many others.

It also enables pre-computations, interpolations between various scenarios, and many other features that Lumen currently cannot handle and likely never will, considering the industry’s probable shift toward something similar to ReSTIR GI in the future. Unfortunately, most players’ PCs struggle to keep up, leading to severe artifacts and noticeable ghosting, as recently seen in Stalker 2.

This is expected when relying on upscaling, which reduces the internal resolution, combined with Lumen being heavily downsampled as well. The end result is poor, especially for players who are still predominantly on 1080p setups.

There should at least be a probe system, like those used in other engines, based on a grid with volumes, similar to RTXGI (or even attached to the player, as it was possible with RTXGI to capture details more accurately in real-time). This would allow for density control based on the desired outcome, be performance-efficient, support both pre-baking and real-time calculations, offer various settings, and enable interpolation.

Currently, the choices are limited to either no GI, resulting in a very flat look, CPU-based baking, or Lumen with its numerous temporal issues.


somebody correct me if i’m wrong but… afaik (and i checked the visualizers) lumen is grid-based too for the basic distribution of light. the surface cache handles fine-grained hits on the walls, and screen traces handle far distances where the cache is not available, as well as objects that have no surface cache representation.

screen-traced objects are the most prone to temporal artefacts, be it out-of-range objects or skeletal meshes. it is what it is. you have motion vector reconstruction, but you can’t reconstruct what you didn’t see in the last frame: the whole motion disocclusion thing. and offscreen objects that enter the frame quickly may have those too.

Even without the screen traces pass as shown in my video, I still have issues.

disable ao and test it. that’s screenspace too.

Already done in my last video, it doesn’t change anything :slight_smile:

It’s true that reducing Lumen’s downsampling to a very low factor (like r.Lumen.ScreenProbeGather.DownsampleFactor between 0 and 4) fixes the issue, but the performance cost is huge, higher than both ReSTIR GI and DI combined.


i guess i’d have to find or model some brutalist architecture to get this geometry detail and test.


Haha, don’t worry about it. :slight_smile:

Worst case, I’ll add an option to manage Lumen’s downsampling in the settings. It really bothers me to have to do this, as players with mid-range PCs will face lighting issues, while those with high-end PCs capable of handling heavier calculations won’t have any problems.
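If it helps, one low-friction way to ship that option (a sketch, assuming the cvar keeps its current name and the usual quality-index mapping where @2 is High and @3 is Epic) is to tie the downsample factor to the existing Global Illumination scalability groups in the project’s DefaultScalability.ini, so the in-game GI quality setting drives it:

```ini
[GlobalIlluminationQuality@2]
; "High": default probe density, cheapest
r.Lumen.ScreenProbeGather.DownsampleFactor=16

[GlobalIlluminationQuality@3]
; "Epic": denser probes, fewer disocclusion artifacts, higher cost
r.Lumen.ScreenProbeGather.DownsampleFactor=8
```

That way mid-range players get the cheap default, and the heavier setting only kicks in on presets their hardware has already opted into.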

It’s a real shame, but that’s just how it is when you’re stuck with technological choices that don’t make sense for your own project.

It reminds me of this post:
https://www.reddit.com/r/UnrealEngine5/comments/1ho9ija/whats_with_the_shadows_in_the_grass/

Maybe you could create an almost empty project that isolates the issue, so other users could test it.