Lumen GI and Reflections feedback thread

I’ve already tested with fully opaque trees, and it doesn’t prevent the occlusion/de-occlusion issues. It isn’t even specific to trees; it happens with anything that sits between the camera and a building farther in the background.

Of course, it’s more noticeable with trees since their geometric complexity partially obstructs the view. Fully opaque trees easily drop the framerate by 10 to 15 FPS for me, so the cost is far from negligible. Nanite isn’t designed for this kind of scenario yet, but that’s another topic.

I think Sebastian is suggesting that you try rendering the tree with a translucent shader instead of an opaque one, but with the opacity set to 1 or nearly so (you would also need to manually enable depth and velocity output).
This forces the tree to render in a separate pass. Unfortunately, it opens up a bunch of other cans of worms, like depth sorting, overdraw, and the Lumen translucency volume, but perhaps it could prevent the dis-occlusion.
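
If it helps, here is a minimal editor-side sketch of that setup. The property names are my reading of the UMaterial API, in particular I’m assuming the “Output Depth and Velocity” checkbox maps to bOutputTranslucentVelocity; treat this as a sketch, not gospel:

```cpp
#include "Materials/Material.h"

// Editor-only sketch: switch a tree material to the translucent pass and
// enable depth/velocity output so temporal effects still work.
void MakeTreeMaterialTranslucent(UMaterial* TreeMaterial)
{
    TreeMaterial->BlendMode = BLEND_Translucent;      // render in the translucent pass
    TreeMaterial->bOutputTranslucentVelocity = true;  // assumed: "Output Depth and Velocity"
    TreeMaterial->PostEditChange();                   // editor-only: trigger a recompile
}
```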

I suspect this would be more trouble than it’s worth, but I guess it doesn’t hurt to try.

That would make the mesh incompatible with Nanite, so the rendering cost would be very high. Rendering non-Nanite meshes with VSM would blow up my performance budget. It’s absolutely not a viable option.

I’ve exposed Lumen downsampling as a setting, which significantly mitigates the issue. It comes at the cost of reduced performance, but at least the option is there. :slight_smile:
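
For reference, in case anyone wants to reproduce this: a minimal sketch, assuming the setting wraps Lumen’s screen-probe downsample factor (the cvar name is real in 5.x; that it’s the one meant here is my assumption):

```cpp
#include "HAL/IConsoleManager.h"

// Equivalent to typing "r.Lumen.ScreenProbeGather.DownsampleFactor <N>" in the console.
void SetLumenDownsample(int32 Factor) // e.g. 8, 16, or 32
{
    if (IConsoleVariable* CVar = IConsoleManager::Get().FindConsoleVariable(
            TEXT("r.Lumen.ScreenProbeGather.DownsampleFactor")))
    {
        CVar->Set(Factor, ECVF_SetByCode);
    }
}
```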

If you’re using alpha masking and/or WPO on that tree, it’s already using Nanite’s slower programmable-pixel path and may not actually be much faster than the standard rasterizer.

In my projects, I’ve often found Nanite foliage to perform worse than non-Nanite, even with VSM.

However, translucent shaders are more expensive than either, so I suspect it would be slower regardless. That said, I think it could still be worth profiling before concluding it isn’t viable.

The trees I use are heavy in triangles, and even though Nanite isn’t fully optimized for them yet (especially the masked parts; the opaque parts of the tree are handled very well), they are still much heavier with classic LODs. I tested and compared that a long time ago :slight_smile:

I did a quick test. Indeed, when the tree is translucent, the smearing stops, or is drastically reduced. I used “TranslucentGreyTransmittance”, but maybe there’s a much better way to give an object 0.001% transparency.
You can also notice the effect in the wireframe portions.

My computer is old and slow, so you should probably disregard the FPS displayed on screen. That said, the video with the transparent tree actually shows a higher FPS than the one with the opaque tree and the smearing.

The first one is the opaque tree:
opaque.mkv (8.0 MB)

and here is the transparent one:

transp.mkv (10.3 MB)

How do I embed video files?


Have you tried increasing the Bounds Scale of the occluded meshes? To something like 5-10.
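
For anyone testing this at runtime rather than in the details panel, a minimal sketch (BoundsScale is a real property on PrimitiveComponent; the refresh calls are my assumption of what’s needed):

```cpp
#include "Components/PrimitiveComponent.h"

// Inflate the bounds of an occluded mesh so it survives occlusion culling longer.
void InflateBounds(UPrimitiveComponent* Occluded, float Scale = 5.0f)
{
    Occluded->BoundsScale = Scale;     // default is 1.0
    Occluded->UpdateBounds();          // recompute bounds with the new scale
    Occluded->MarkRenderStateDirty();  // let the renderer pick up the change
}
```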

Yes, it doesn’t make much of a difference. Only downsampling to 8 significantly reduces the occlusion/de-occlusion issue, even if it isn’t perfect. At 4 and below it becomes almost invisible, but at 8 it’s already greatly reduced; a keen eye will still notice it, of course.

I created settings to manage this under the name “Global Illumination Stability,” starting with Auto (using a custom algorithm to calculate appropriate sampling based on internal resolution/window size), and then offering options for 32, 16, and 8. It’s not perfect, but it gets the job done. Maybe if Epic improves this someday, or if more interesting GI solutions are developed and become available, I’ll switch. But for now, I’m not going to overthink it.
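
For illustration only, here’s a hypothetical sketch of how such a setting could map onto Lumen’s downsample factor. The enum and the Auto heuristic are my inventions; only the cvar name is real:

```cpp
#include "CoreMinimal.h"
#include "HAL/IConsoleManager.h"

enum class EGIStability { Auto, Fast32, Balanced16, Stable8 };

void ApplyGIStability(EGIStability Mode, FIntPoint InternalRes)
{
    int32 Factor = 16;
    switch (Mode)
    {
    case EGIStability::Auto:
        // Assumed heuristic: finer probes (lower factor) at lower internal resolutions.
        Factor = (InternalRes.X >= 2560) ? 16 : 8;
        break;
    case EGIStability::Fast32:     Factor = 32; break;
    case EGIStability::Balanced16: Factor = 16; break;
    case EGIStability::Stable8:    Factor = 8;  break;
    }

    if (IConsoleVariable* CVar = IConsoleManager::Get().FindConsoleVariable(
            TEXT("r.Lumen.ScreenProbeGather.DownsampleFactor")))
    {
        CVar->Set(Factor, ECVF_SetByCode);
    }
}
```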


Out of curiosity, is your scene large enough that generating an HLOD makes sense? Just noticing the lack of surface cache coverage.

There is indeed surface caching at a greater distance with r.LumenScene.SurfaceCache.MeshCardsMergeInstances.
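
In case someone wants to toggle it quickly, a minimal sketch (the cvar name is the one above; that 1 enables the merging is my assumption, check the cvar help text in your version):

```cpp
#include "HAL/IConsoleManager.h"

// Enable merged mesh cards for instances in the Lumen surface cache.
void EnableMeshCardsMerge()
{
    if (IConsoleVariable* CVar = IConsoleManager::Get().FindConsoleVariable(
            TEXT("r.LumenScene.SurfaceCache.MeshCardsMergeInstances")))
    {
        CVar->Set(1, ECVF_SetByCode); // assumed: 1 = on
    }
}
```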

The environment is fully ISM-based, with a preprocessing step during cooking that merges all identical elements together, which also significantly optimizes memory usage. I’m not really using World Partition streaming for the environment (so no HLOD), to avoid all the stuttering caused by streaming and actor initialization that currently affects most Unreal games. (The AI and other resource-intensive but lightweight elements are streamed, but they cause no stuttering since it’s minimal, unlike streaming the entire environment.) :slight_smile:
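
Purely as an illustration of that cook-time merge idea (the function, grouping criterion, and cleanup below are my own sketch, not the actual preprocessing step):

```cpp
#include "Components/InstancedStaticMeshComponent.h"
#include "Components/StaticMeshComponent.h"
#include "GameFramework/Actor.h"

// Group identical static meshes and replace them with one ISM component per
// unique mesh, so each unique mesh becomes a single instanced draw.
void MergeIdenticalMeshes(AActor* EnvironmentRoot, const TArray<UStaticMeshComponent*>& Sources)
{
    TMap<UStaticMesh*, UInstancedStaticMeshComponent*> Merged;

    for (UStaticMeshComponent* Source : Sources)
    {
        UStaticMesh* Mesh = Source->GetStaticMesh();
        if (!Mesh) continue;

        UInstancedStaticMeshComponent*& ISM = Merged.FindOrAdd(Mesh);
        if (!ISM)
        {
            ISM = NewObject<UInstancedStaticMeshComponent>(EnvironmentRoot);
            ISM->SetStaticMesh(Mesh);
            ISM->RegisterComponent();
        }
        ISM->AddInstance(Source->GetComponentTransform(), /*bWorldSpace=*/true);
        Source->DestroyComponent(); // the original component is no longer needed
    }
}
```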

Otherwise, the environment is planned to eventually measure 500 meters by 1.5 kilometers. Currently, it’s 500x500, with only a part of it completed.

I just tested it; I don’t think it actually needs any amount of translucency, it just needs a translucent blend mode. I set “output depth and velocity” to true so that temporal effects are handled properly, and even black leaves against a white building show zero ghosting and dis-occlusion. I won’t pretend there aren’t cons to doing this, but I guess it does “solve” this one problem lol.
For one, the frame-time cost of translucency is pretty high, probably higher than whatever it would cost to render that mesh as non-Nanite. On my 3080, overdrawing the full screen at 4K costs about 0.4 ms per layer when using the expensive surface forward shading. So a tree (or trees) with many layers of overdraw would probably consume an unreasonable amount of frame time.
This is far worse than the cost of switching a mesh to non-Nanite, since quad overdraw is much easier to manage on opaque and masked meshes, even densely detailed ones.

…But under specific, controlled circumstances, I think it can be a viable workaround, especially if you can use one of the cheaper translucency modes (the cheapest costs about 0.1 ms) and make sure that multiple surfaces don’t overlap.
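
To put rough numbers on that, here’s a back-of-the-envelope sketch using the figures above (0.4 ms per full-screen layer, 0.1 ms for the cheapest mode); the coverage and layer counts in the example are made up:

```cpp
// Rough overdraw cost model from the numbers above; purely illustrative.
float TranslucencyCostMs(float LayersOfOverdraw, float ScreenCoverage, float FullScreenLayerCostMs)
{
    return LayersOfOverdraw * ScreenCoverage * FullScreenLayerCostMs;
}

// A tree covering 25% of the screen with 8 layers of overdraw, surface forward shading:
// TranslucencyCostMs(8.0f, 0.25f, 0.4f) = 0.8 ms, a big slice of a 16.6 ms frame budget.
```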

But is that translucency option the only way to prevent an object from obscuring the polygons behind it? I’m guessing the engine has other options too.

And since the “tree” causes the polygons behind it to disappear, as you can see in the picture, and this is the main cause of the smearing and the redrawing of shadows, I wonder if one could shrink the occlusion margins of the tree inward, so that by the time the tree moves into a new position, the shadows have already finished redrawing?
Like in the next picture. I’m not sure if it’s clear what I’m saying; it’s not very well phrased.
The tree would still cull the objects behind it and make their polygons disappear, just not starting right at its edges.

It is the only straightforward option, due to the nature of the depth buffer: Unreal only draws the nearest pixel of opaque objects, to avoid drawing unnecessary stuff. In theory you could build a multi-layered rendering system so that some obscured triangles still exist and are theoretically accessible. I’ve seen this done in a custom rendering engine to produce (mostly) artifact-free motion blur for a research paper. It would take significant engine modifications and require an additional GBuffer for each layer you want to access. I’m not sure it would be viable in real time, and it would be a lot of work to find out.

Translucency, on the other hand, is by its nature rendered separately and composited on top.

So in other words, yes, I think your idea would work on a theoretical level, but I don’t see it working on a practical level. And fast enough dis-occlusion would still cause artifacts once you run out of padding.

Try this: in the World Settings, change the Global Distance Field View Distance to a higher number. It defaults to 20k units (200 m). Keep in mind that there are pros and cons to raising it.

Unfortunately, since it appears they’re using hardware RT, I don’t know if changing the distance field would make a difference; otherwise I would agree with your solution.

HWRT still uses screen traces as well, as far as I know. I just know that variable fixed some of the issues I was having on larger scenes that were bugging out. It’s a quick variable to test that may or may not do anything in their case.

By default it does, but the global distance field isn’t a screen-trace method; it’s a world-space method using SDFs. I’m very curious that it does anything at all in those situations, unless you’re somehow layering DFAO or another distance-field effect onto your scene?

And you can disable screen traces for HWRT in the PPV, as I’m sure you know. Or upgrade the short-range AO to use hardware RT as needed.
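
For completeness, the code/console route for those toggles, as I understand the cvar names in 5.x (worth verifying with the console’s autocomplete in your version):

```cpp
#include "HAL/IConsoleManager.h"

static void SetCVarInt(const TCHAR* Name, int32 Value)
{
    if (IConsoleVariable* CVar = IConsoleManager::Get().FindConsoleVariable(Name))
    {
        CVar->Set(Value, ECVF_SetByCode);
    }
}

void DisableLumenScreenTraces()
{
    SetCVarInt(TEXT("r.Lumen.ScreenProbeGather.ScreenTraces"), 0); // GI screen traces off
    SetCVarInt(TEXT("r.Lumen.Reflections.ScreenTraces"), 0);       // reflection screen traces off
}
```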

Most flickering problems are caused by AA methods, for example TSR. It adds a frequency filter that makes the lighting strobe, which becomes visible when mixing everything while trying to achieve darker scenery. Adding more effects that create a disproportion between light and shadow adds noise to the scene; VSM and ray-traced (distance field) illumination and shadow methods also apply a similar kind of filter. The question is how to fix that pulsing.
Flicker

Is there any way in Unreal Engine 5.5, whether through a console command or a direct modification to the engine, to completely remove the contribution of emissive materials to Lumen? If so, could you please indicate where or how this can be achieved? This is a serious issue, as Lumen handles emissive materials very poorly, quickly adding noise across the entire area and making the lighting highly unstable. I would like to use emissive materials without them affecting the scene lighting, but without having to disable screen traces either.

Previously, I used a workaround: emissive translucent materials with opacity set to 1. Lumen didn’t account for them back then, which made it an effective trick. In 5.5, however, Lumen now processes these materials, leaving no room for that kind of workaround.

It shouldn’t, as long as you don’t set the screen trace source to Antialiased Scene Color with Translucency; if you leave it at Scene Color, it should ignore translucency.