Lumen surface cache scale woes

Working on a project that deals with environments at multiple scales–interior room scale and also miniature scale (i.e., metre scale vs. centimetre scale)–with various levels of camera zoom and aperture settings to represent both. Nothing is particularly unusual about this–Demeo and Inscryption both do the same thing in Unity, for example, representing tabletop game pieces within a larger 3D room. The problem I’m running into is that Lumen seems to have hard minima on the size of objects whose cards it will admit into the surface cache, and these minima are expressed in world units rather than screen-size units. This means that when I am zoomed in very close to very small objects, they are missing from the surface cache completely and don’t affect GI properly. Here’s what I’ve tried:

  • Adjusting Lumen Scene Detail in the post-processing settings (see the sketch after this list). The UI caps this at 4, but I think it’s internally clamped at 8 (LumenSceneRenderer.cpp ln 2120, but I haven’t done too much digging so I don’t know if this is the only place it’s clamped). No setting will cause a 5cm chess piece to find its way into the cache, even when it’s currently zoomed to fill the screen and visibly affecting nearby objects. I’m also not sure if this has any other knock-on effects, since a quick engine code search didn’t reveal anything, but judging by forum posts people definitely seem to be under the impression that raising Scene Detail carries a more serious performance cost than a quick look at the code would suggest.
  • Scaling up the entire scene 10x. This caused small objects to be added to the surface cache because they were now above the minimum world-unit size, but working in mm units caused numerous other problems with light falloff, camera settings, physics settings and, confusingly, Lumen itself, which began to produce blotchy RGB lighting on interior walls (it looked like some kind of sampling issue, but I didn’t dig), so this seemed like a no-go.
  • Crying. This achieved nothing but I felt incrementally better.
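
For reference, this is roughly how I’ve been pushing Scene Detail from code to rule out the editor UI cap. It’s just a sketch: I’m assuming the “Lumen Scene Detail” slider maps to LumenSceneDetail on FPostProcessSettings, which is what it looks like from the headers.

    // Sketch only: assumes the "Lumen Scene Detail" slider maps to
    // FPostProcessSettings::LumenSceneDetail, set here on an existing post-process volume.
    #include "Engine/PostProcessVolume.h"

    void BoostLumenSceneDetail(APostProcessVolume* Volume)
    {
        if (!Volume)
        {
            return;
        }
        Volume->Settings.bOverride_LumenSceneDetail = true;
        Volume->Settings.LumenSceneDetail = 8.0f; // past the UI cap of 4; appears to be clamped internally around 8
    }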

I did a ton of testing and came away with the following observations:

  • If an object’s size on import is too small, it will be yellow in the surface cache view (i.e., culled) no matter what its world scale is. I assume that’s because small-scale objects don’t get Lumen cards generated in the first place. Importing centimetre-scale objects at metre scale fixed this, but they were still culled once scaled down far enough, so no dice.
  • I experimented with surface cache vs. hit lighting on small-scale objects. Screen traces still seemed to work a-OK even when the surface cache was used, but they were predictably biased by on-screen results, such that when the view was rotated, bounced light would swim around as in SSGI. Hit lighting worked perfectly, but I understand it’s expensive. Unfortunately the Ultra 9 285K and RTX 5080 in my dev box will eat absolutely anything you throw at them and spit out perfect 8.3ms frames regardless (and my min-spec box isn’t here yet), but I don’t think shipping with only hit lighting would be viable for typical consumer hardware. Hit lighting was also weirdly noisy, with more fireflies, and produced a visibly worse result on metre-scale objects, which seemed a bit counter-intuitive for the more expensive option.
  • Threats and cajoling have no effect on the behaviour of the Unreal Engine. Unclear if this is a feature or a bug.

So I’m sort of stuck here, and hoping someone has some really good advice, or knows the secret r.Lumen.MakeLumenBehave cvar (which needs to be set to 2 for some reason), or has some option I haven’t thought of. Because right now I evaluate my options as follows:

  • Ditch Lumen and use SSGI and DFAO. I was planning to provide this option anyway with a separately tuned lighting setup for older hardware, so instead I’d just cut out Lumen entirely. The problem is that Lumen looks really good–when it’s not causing problems.
  • Fork the engine and unclamp Lumen Scene Detail. This feels a little dangerous since there are small meshes in the metre-scale scene that contribute little to GI and probably should be culled.
  • Fork the engine and add a “force into surface cache” flag on meshes, populated through the Lumen surface cache task data or whatever, to just give me more fine-grained control, or…
  • Fork the engine and modify the surface cache generation to do what it probably should do in the first place, which is use projected screen-space size for cards/primitives rather than absolute world units (see the sketch after this list). I know this is probably the correct answer but, god… I would rather do almost anything else with my life.
  • Give up game development and become a monk. The only downside to this is the tonsure is a little frumpy, but I could probably get over it if it meant I never had to look at another computer again.
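
For what it’s worth, the screen-space test I have in mind for that engine-fork option is nothing exotic. A minimal sketch (my own maths, not engine code) that estimates how much of the screen a card’s bounding sphere covers and culls on that rather than on absolute size:

    #include "Math/UnrealMathUtility.h"

    // Approximate fraction of the vertical screen extent covered by a bounding sphere
    // of WorldRadius at Distance from the camera, given the vertical FOV in radians.
    float ProjectedScreenFraction(float WorldRadius, float Distance, float VerticalFovRadians)
    {
        const float HalfScreenWorldHeight = Distance * FMath::Tan(VerticalFovRadians * 0.5f);
        return (HalfScreenWorldHeight > KINDA_SMALL_NUMBER)
            ? (WorldRadius / HalfScreenWorldHeight)
            : 1.0f;
    }

    // Hypothetical culling test: keep the card if it covers more than ~2% of the screen,
    // regardless of its absolute size in world units.
    bool ShouldKeepCard(float WorldRadius, float Distance, float VerticalFovRadians)
    {
        return ProjectedScreenFraction(WorldRadius, Distance, VerticalFovRadians) > 0.02f;
    }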

Any thoughts or advice?

What do you mean by “admit into the surface cache”? Do you mean the object is not visible at all in the Lumen debug views, or that it is visible but colored yellow/pink in the surface cache debug view?

It’s coloured yellow in the surface cache view, which, unless I’m wrong, should mean it’s valid but being culled, right?

Yellow means the surface cache entry was culled, yes, but that is based on the size on screen. You can force Lumen to always draw the object into the surface cache by enabling “emissive light source” on the actor.
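
If you want to flip it at runtime rather than in the Details panel, something like this should work. It’s a sketch and assumes the checkbox maps to bEmissiveLightSource on UPrimitiveComponent, which is my understanding:

    #include "Components/PrimitiveComponent.h"

    // Mark a component as an emissive light source so Lumen keeps it in the surface cache.
    void ForceIntoLumenSurfaceCache(UPrimitiveComponent* Component)
    {
        if (Component)
        {
            Component->bEmissiveLightSource = true;
            Component->MarkRenderStateDirty(); // push the change to the render proxy
        }
    }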

The actor can still be culled from the hardware raytracing scene, though; if this happens it will disappear completely from the debug views. There’s a raytracing group priority setting which is ostensibly supposed to remedy this, but it doesn’t seem to work for me (on Vulkan/Linux at least). If you’re using software raytracing, the global distance field has a minimum resolution it can resolve, which if I recall correctly is around 20cm, and there’s basically nothing you can do about that.

If you’re using Lumen software raytracing there are at least two cvars that I know of: “r.Lumen.DiffuseIndirect.MeshSDF.RadiusThreshold” and “r.LumenScene.SurfaceCache.MeshCardsMinSize”. The second one may also affect the HWRT path, I don’t really know…

But they will definitely decrease performance if you lower them too much, and they’re set up this way for a reason.
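
If you’d rather set them from code than the console, something like this, with whatever values you end up needing (these are just placeholders):

    #include "HAL/IConsoleManager.h"

    void LowerLumenCullingThresholds()
    {
        // Smaller value = smaller meshes get cards in the surface cache.
        if (IConsoleVariable* MinSize = IConsoleManager::Get().FindConsoleVariable(
                TEXT("r.LumenScene.SurfaceCache.MeshCardsMinSize")))
        {
            MinSize->Set(1.0f);
        }
        // Software-tracing path: keep small mesh SDFs from being culled.
        if (IConsoleVariable* RadiusThreshold = IConsoleManager::Get().FindConsoleVariable(
                TEXT("r.Lumen.DiffuseIndirect.MeshSDF.RadiusThreshold")))
        {
            RadiusThreshold->Set(1.0f);
        }
    }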


Hm, so it seems like the Emissive Light Source switch prevents the object from being culled at any distance if its absolute world size is large enough, but very small objects are still culled (appear as yellow) regardless. That threshold looks to be somewhere around 5-6 units. It’s a little hard to judge just by messing around with static mesh scales.

It’s 4cm. You need a 4cm object to be visible in the Lumen scene?

Huh, ok, so the combination of both the Emissive Light Source switch and r.LumenScene.SurfaceCache.MeshCardsMinSize actually seems to work, which suggests these meshes are being culled on multiple criteria. I wonder what the perf implications of that are, though.

If you only need it when you’re zooming in on an object, then you can probably set it dynamically for that one object and it will be fine. If you need it to affect the entire scene, it may become a problem.
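
Roughly what I mean by setting it dynamically, as a sketch (the 2.0 / 10.0 thresholds are placeholders, not measured defaults):

    #include "HAL/IConsoleManager.h"

    static void SetMeshCardsMinSize(float Value)
    {
        if (IConsoleVariable* CVar = IConsoleManager::Get().FindConsoleVariable(
                TEXT("r.LumenScene.SurfaceCache.MeshCardsMinSize")))
        {
            CVar->Set(Value);
        }
    }

    // Call this from whatever drives your camera zoom: drop the threshold while the
    // miniatures fill the screen, restore it when you pull back out.
    void OnZoomChanged(bool bZoomedIntoMiniatures)
    {
        SetMeshCardsMinSize(bZoomedIntoMiniatures ? 2.0f : 10.0f);
    }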

Yeah, that’s my hope. I’ll keep an eye on it. Thanks! Worst case, I can still do a custom engine build for finer control if we need to.


I think you need to figure out whether your problem is screen-space or world-space culling. By the looks of it there are multiple culling stages.

r.LumenScene.SurfaceCache.MeshCardsMinSize - should be world space.

r.LumenScene.SurfaceCache.CardMinResolution - should be screen space.

But there are a lot more cvars. Take a look at r.LumenScene.SurfaceCache.*

And meshes can be culled from the SDF completely with r.Lumen.DiffuseIndirect.MeshSDF.RadiusThreshold, which is in world space too.
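
To see which stage is actually biting, you could also dump the current values from code (or just type the cvar names into the console), e.g.:

    #include "CoreMinimal.h"
    #include "HAL/IConsoleManager.h"

    void LogLumenCullingCVars()
    {
        const TCHAR* Names[] = {
            TEXT("r.LumenScene.SurfaceCache.MeshCardsMinSize"),      // world space
            TEXT("r.LumenScene.SurfaceCache.CardMinResolution"),     // screen space
            TEXT("r.Lumen.DiffuseIndirect.MeshSDF.RadiusThreshold")  // world space, SW tracing
        };
        for (const TCHAR* Name : Names)
        {
            if (IConsoleVariable* CVar = IConsoleManager::Get().FindConsoleVariable(Name))
            {
                UE_LOG(LogTemp, Log, TEXT("%s = %s"), Name, *CVar->GetString());
            }
        }
    }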


Well, as mentioned, there’s also the global SDF at issue, which is pretty much irresolvable without performance-destroying global SDF resolution increases or massive engine rewrites. I think in this case it’s acceptable to ship with Lumen only available when raytracing is enabled, given the complexity of using mixed scales anyway. We’re four generations of raytracing hardware in, and even someone on an RTX 2000-series card would likely prefer the fallback over a Lumen-powered slideshow. I’m hoping this will eventually ship with MegaLights anyway, where RT is non-negotiable, and that way I only need to support two lighting scenarios (raytraced Lumen + MegaLights / non-Lumen fallback).

If you have static scenes there’s another fallback you could utilise: stationary GI. Not full shadow lightmaps, just GI. It can be baked inside or outside of Unreal, and mixed with MegaLights’ “screenspace” shadows and raytraced or distance field reflections.

There are so many options these days, and you can switch most of them on the fly or swap out shaders or assets.


Good point! Unfortunately the small-scale scenes here use almost entirely dynamic assets and dynamic lighting, which is why I was hoping to leverage Lumen for them, with SSGI and DFAO as a fallback. The upside to the project is that the environment is relatively contained and easy to control for bespoke solutions (worst case: if I need to manage a lot of things by hand, I can), but the downside is there’s very little opportunity to bake anything.