Working on a project that deals with environments at multiple scales: interior room scale and miniature scale (i.e., metre scale vs. centimetre scale), with various levels of camera zoom and aperture to represent both. Nothing about this is particularly unusual; Demeo and Inscryption both do the same thing in Unity, for example, representing tabletop game pieces within a larger 3D room. The problem I'm running into is that Lumen seems to have hard minima on the size of objects whose cards it will admit into the surface cache, and these minima are in world units rather than screen-size units. That means that when I'm zoomed in very close to very small objects, they're missing from the surface cache completely and don't contribute to GI properly. Here's what I've tried:
- Adjusting Lumen Scene Detail in the post-process settings. The UI caps this at 4, but I think it's internally clamped at 8 (LumenSceneRenderer.cpp ln 2120, though I haven't dug deeply, so I don't know if that's the only place it's clamped). No setting will get a 5 cm chess piece into the cache, even when it's currently zoomed to fill the screen and visibly affecting nearby objects. I'm also not sure whether raising Scene Detail has other knock-on effects; a quick engine code search didn't turn up anything, but forum posts leave the strong impression that it degrades performance more seriously than the UI suggests.
- Scaling the entire scene up 10x. This got small objects into the surface cache, since they were now above the minimum world-unit size, but working in effectively-millimetre units caused numerous other problems with light falloff, camera settings, physics settings and, confusingly, Lumen itself, which began producing blotchy RGB lighting on interior walls (it looked like some kind of sampling issue, but I didn't dig). So this seemed like a no-go.
- Crying. This achieved nothing but I felt incrementally better.
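For anyone wanting to poke at the same clamp: these are the cvars I found worth checking. Names are from memory and version-dependent, so type the `r.LumenScene.SurfaceCache.` prefix in the console to list what your build actually exposes before trusting them.

```
; From memory / version-dependent -- verify against your engine build.
; Meshes below this world-space size (in cm) get no Lumen cards at all:
r.LumenScene.SurfaceCache.MeshCardsMinSize 1
; Smallest card resolution (in texels) the cache will allocate:
r.LumenScene.SurfaceCache.CardMinResolution 4
```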
I did a ton of testing and came to the following observations:
- If an object's size on import is too small, it shows yellow in the surface cache view (i.e., culled) no matter what its world scale is. I assume that's because small-scale objects never get Lumen cards generated in the first place. Importing centimetre-scale objects at metre scale fixed this, but they were still culled once scaled down far enough, so no dice.
- I experimented with surface cache vs. hit lighting on the small-scale objects. Screen traces still seem to work fine even in surface cache mode, but they're predictably biased toward on-screen results, so that when the view rotated, bounced light would swim around as in SSGI. Hit lighting worked perfectly, but I understand it's expensive. Unfortunately the Core Ultra 9 285K and RTX 5080 in my dev box will eat absolutely anything you throw at them and spit out perfect 8.3 ms frames regardless (and my min-spec box isn't here yet), but I don't think shipping with hit lighting only would be viable on typical consumer hardware. Hit lighting was also weirdly noisy, with more fireflies, and produced a visibly worse result on metre-scale objects, which seems counter-intuitive for the more expensive option.
- Threats and cajoling have no effect on the behaviour of the Unreal Engine. Unclear if this is a feature or a bug.
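For reference, these are the toggles I used to A/B the above. Again, the names are from memory, so confirm they exist in your engine version before drawing conclusions from them.

```
; From memory -- confirm against your build before trusting the results.
; 0 = evaluate ray hits from the surface cache, 1 = hit lighting:
r.Lumen.HardwareRayTracing.LightingMode 1
; Disable screen traces to see what the surface cache alone contributes
; (useful for spotting exactly which objects are missing from it):
r.Lumen.ScreenProbeGather.ScreenTraces 0
```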
So I'm sort of stuck here, and hoping someone has some really good advice, knows the secret r.Lumen.MakeLumenBehave cvar (which needs to be set to 2 for some reason), or has an option I haven't thought of. Because right now I evaluate my options as follows:
- Ditch Lumen and use SSGI and DFAO. I was planning to provide this as an option anyway, with a separately tuned lighting setup for older hardware, so instead I'd just cut Lumen out entirely. The problem is that Lumen looks really good, when it's not causing problems.
- Fork the engine and unclamp Lumen Scene Detail. This feels a little dangerous since there are small meshes in the metre-scale scene that contribute little to GI and probably should be culled.
- Fork the engine and add a "force into surface cache" flag on meshes, plumbed through the Lumen surface cache task data or wherever, to give me more fine-grained control, or…
- Fork the engine and modify the entire surface cache generation to do what it arguably should have done in the first place: use projected screen-space size for cards/primitives rather than absolute world units. I know this is probably the correct answer but, god… I would rather do almost anything else with my life.
- Give up game development and become a monk. The only downside is that the tonsure is a little frumpy, but I could probably get over that if it meant never having to look at another computer again.
Any thoughts or advice?