Lumen Metals in Reflections

So I already shared a bit about my reflection experiments in the general Lumen thread, but I didn’t want to keep cluttering it. Thanks to some community suggestions, I wanted to post some improvements.

Original Comment:

Thanks to this thread I was able to get Lumen representations in Scene Captures. This addresses one of my main quality concerns. Hopefully we see this in an official update in the future.

A Without Lumen Enabled in capture:

B With Lumen Enabled in capture:

C Path Tracer:

My notes:
As in my last post, the RT feature switch in the shader enables falling back to a cubemap for missed screen traces. This allows convincing metallic objects - even mirror objects - to appear in reflections instead of the matte surface cache or black hit-lighting result.

Enabling Lumen for captures further enhances the result in several ways. First, we can observe GI in the fake second bounce. Look closely at the reflection of the sphere within the cube and you’ll see that in Image A the roof overhang is too heavily shadowed. This is easily observed by comparing it to the real-time reflection in the sphere itself right next to it. In Image B, the GI in the capture allows the lighting to match the real-time reflection much better, making the reflected object appear much more integrated into the scene.

You will also see that there is even a fake 3rd bounce, but only for objects that are included in the reflection capture. In this example, the ‘Unreal Material Sphere’ is included and appears to have a 3rd fake bounce. The standard sphere and cube are excluded from the cubemap to represent dynamic objects, so we cannot observe a 3rd+ bounce as we would in the path tracer.

The inclusion of Lumen in the cubemap has allowed the ‘Unreal Material Sphere’ to retain its metallic look even in the fake 3rd bounce, whereas metallic objects in the standard capture appear to only reflect the skybox.

Since it relies partially on cube mapping, the result will of course depend upon the quality and accuracy of the cubemap. This is seen in the slightly mispositioned 3rd bounce, caused by the difference between the capture position and the object’s position.

Perhaps the most noticeably absent element to my eye is the reflection of the shadow. Since the shadow is absent in the cubemap, it is absent on the sphere using it as a fallback. A fake shadow could be composited into the cubemap trivially, and in fact I already do apply some subtle masking towards the bottom to fake ambient occlusion.

There are a few other drawbacks to the technique. Obviously it requires authoring cubemaps, but mostly only for areas with high concentrations of metallic objects. Generic cubemaps will probably be good enough for the vast majority of real-time scenes. Cubemap resolution can be shockingly low in most cases; this scene was captured at just 128 pixels wide.

Because the technique once again relies on emissive, the strength of the effect needs to be tuned to the lighting conditions to prevent the metal from appearing to glow in reflections. If lighting conditions can change for a given scene, you may need multiple cubemaps, or at the very least to reduce the emissive level to match ambient levels.

Rough metallic objects need more effort to achieve a convincing result, but perhaps a naïve approach would be to simply force lower-quality mips to make the reflection appear blurrier (see the sketch below). Cubemaps also seem to really struggle with volumetric clouds, but the result looks passable enough.
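A quick sketch of that mip-forcing idea in custom-node terms (Cube, R, Roughness, and MaxMip are illustrative inputs; MaxMip would be the log2 of the cubemap face size):

// Rougher surfaces sample blurrier mips of the capture.
return Cube.SampleLevel(CubeSampler, R, Roughness * MaxMip).rgb;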

Added shader complexity is quite low, but we do have to include an extra texture sample for the cubemap.

There is still room for improvement, but I hope this helps anyone struggling with metallic objects and encourages Epic to optionally enable Lumen for all captures in a future release.


I truly do appreciate all of this, I’ve been following your documentation @BananableOffense and found it particularly useful for some experimental scenes I’ve been working on. I must admit I’ve had to reread your posts more than a few times to understand them; the cubemap masking is slightly confusing to me, but I’m getting there :)

If you are looking for a stress-test scene for Lumen metallic reflections, may I suggest the SciFi Hallway scene from UE4. I was playing around with it a while ago and discovered that the dull specular reflections pre-RT art styles love become utterly broken with Lumen: massive disocclusion and view-dependent reflections, among other problems. I wonder if applying the same system you’ve come up with to a box reflection capture might offer some interesting results.

To restate the obvious, it would be lovely if Epic could include multi-bounce reflections in Lumen. Regardless of performance implications, that is the one area where RT reflections retain an advantage (in addition to supporting lightmaps). I’ll also note a more general trend in computer graphics: compute power is growing faster than bandwidth or storage, which favors ray tracing over texture lookups or lightmaps.


I took a peek but was having lots of unrelated issues with the project. I did render out a cubemap though, and the dull metal doors looked great in it. I’m pretty confident the technique would solve those issues handily.

Definitely. For movie renders, archvis, and who knows, maybe even some real-time cases, having to jump through 100 hoops just to get an approximation of a second bounce is really pretty wild. Hopefully we get a bounce counter like in true RT. And also hopefully we can eventually get a checkbox to automatically fall back to environment captures for reflective objects in reflections :) haha

As far as masking goes, here’s a full breakdown. The first and most important is masking the normal raster/screen-trace path vs. the ray-traced path. This is achieved with the Ray Tracing Quality Switch node.
Here we can see green in the “Normal” input and red in the “Raytraced” input.


This essentially allows us to have a completely different material in reflections. At its simplest, this can allow us to disable metallic and enable base color in reflections so we can get rid of the black spots when using hit lighting.
This already represents a pretty big improvement in many cases.
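Under the hood, the switch behaves like a compile-time branch. Purely as a conceptual sketch - USE_RAY_TRACED_SHADING is a made-up stand-in for whatever permutation define the node actually compiles down to:

#if USE_RAY_TRACED_SHADING
return RayTracedInput; // e.g. the cubemap fallback, flat base color
#else
return NormalInput;    // the full-quality raster material
#endif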

We get even better results by using a cubemap (multiplied by basecolor). Honestly, for the vast majority of cases this is plenty good enough. Most sane people could probably stop here.
But we can notice that the reflected sky is too dark if we simply plug the cubemap into the basecolor.

If we plug our cubemap into the emissive instead, we get much brighter reflections. But we also introduce a few other problems.
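For reference, here’s a minimal sketch of the sampling itself, assuming a custom node with a TextureCube object input named Cube (which also exposes CubeSampler), plus the camera vector and world normal as inputs:

float3 R = reflect(-CamVec, N);                       // world-space reflection vector
float3 Env = Cube.SampleLevel(CubeSampler, R, 0).rgb; // the captured environment
return Env * BaseColor;                               // tint by surface color before routing onward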

First, lumen projects emissive into the scene as ambient light. Mirrors are not projector screens, so we probably don’t want it to cast light.
If we use the following custom node text, we can control this.

#if LUMEN_CARD_CAPTURE
return InputA; // what the Lumen card capture sees (e.g. black, so the cubemap casts no light)
#else
return InputB; // what everything else sees (the emissive cubemap reflection)
#endif

This is what I mean by masking out the Lumen emissive. The left image is without the custom node, where an emissive cubemap might improperly illuminate the scene. The right image has the custom node, which makes the reflections appear emissive without actually adding any light into the scene. Non-reflections are non-emissive.

Note that I’m not using lumen in my scene captures for this particular demo, so the ground color/lighting doesn’t match perfectly.

The next issue we introduce by using emissive is overbrightening of dark areas, which can make the mesh glow unnaturally. We managed to brighten up the sky, but we made the underside brighter than we really intended to. This is where occlusion masking comes in.


The idea here is to try to reduce the strength of the emissive in areas that would be occluded. You can see on the left, the underside of the torus is unnaturally bright. In this case I’m simply using a gradient to shade the underside until it looks more natural. You could also try baking occlusion into vertex colors.
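A rough sketch of that gradient idea (OcclusionStrength is an illustrative parameter, not my exact material):

float Down = saturate(-N.z);                 // 1 where the world normal points straight down
float Mask = 1.0 - Down * OcclusionStrength; // fade the emissive on the occluded underside
return Env * Mask;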

At this point we’ve pretty much got it where we want it. But there are a few more places that need work.

One case is if lighting conditions are dynamic, such as day/night cycles. The simplest and most convincing approach is probably to capture a new cubemap every now and then, or when a change is detected, but that could cause a hitch for a frame or two when capturing. Another option might be to lerp between cubemaps at the cost of an extra texture sample, or simply to adjust the overall emissive value. Maybe even a cubemap flipbook, if we’re feeling really weird.
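If lerping, a minimal two-cubemap sketch might look like this (CubeDay, CubeNight, and Blend are hypothetical inputs, with Blend driven by a time-of-day material parameter):

float3 Day = CubeDay.SampleLevel(CubeDaySampler, R, 0).rgb;
float3 Night = CubeNight.SampleLevel(CubeNightSampler, R, 0).rgb;
return lerp(Night, Day, Blend); // crossfade as the time of day changes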

Again, perfection isn’t really the goal. It’s just a matter of believability. 99% of the time, especially in real-time, players aren’t going to be willing or able to scrutinize the details enough.

As a final note, this can be used on non-metallic shiny objects too. Like glossy surfaces or even water.

Ah, I see what you mean by masking now; this makes a lot more sense. The ability to substitute in different materials in primary vs. secondary bounces is a very handy one for both art direction and performance, and it makes me wonder if the way Lumen handles emissive lighting might be improved by this means.

And yes, I entirely agree with you: archvis and renders especially, and a bounce counter certainly. I did a lot of testing with RT reflections when they came out, and I generally discovered the maximum number of bounces needed was five, even in the ‘sphere reflecting a sphere’ case, because the reflection of the reflection attenuated to the point where it took up negligible room on-screen and therefore had a negligible cost. An easy fallback would be lovely as well, especially since we know RT reflections already support it. For Lumen reflection scaling, you could have it evaluate hit lighting on the first bounce and simply show the Lumen scene on subsequent ones.

And your tricks with environment captures do remind me of something: Horizon: Forbidden West didn’t have dynamic GI to my knowledge, but they had 16 or so pre-baked times of day that they lerped between, with screen-space effects to fill in the gaps. I wonder if one could harness Nanite to do really cheap dynamic reflection captures, so Lumen could have something to fall back on if multi-bounce RT is just too expensive, if that makes any sense.

Also, this is mildly off topic: I would love it if UE5 would support reflective caustics at some point. I know it creates some nightmares for denoising, but techniques like Nvidia’s Adaptive Anisotropic Photon Scattering (AAPS) have produced excellent real-time results. I believe it would add a whole new layer of realism that we’re simply not used to seeing in games.

Pretty much what I was thinking of when I was brainstorming options to make this real-time friendly. I think a flipbook of a dozen or so baked time-of-day cubes would work pretty nicely, because they can actually be astonishingly low-res before it starts to get really noticeable. Plus, this way you can easily drive every cube using the system from a single material parameter.

For sure. In the meantime, this cube capture technique can also be used to fake translucent refractive materials with extremely convincing results. In fact, that’s how Half-Life: Alyx did its liquid and glass shaders.

With Lumen’s emissive lighting, we can actually produce a mildly believable caustic effect. This is done the opposite way of the previous post’s reflection emission masking. Instead of masking glow out of the cubemap, we can amplify the light propagation beyond the rendered emissive value.

This allows it to push extra light into the scene without the actor looking like a light bulb. We can even control the color independently of the cubemap. A mask can be used to control where the light is emitted, as caustics only really appear roughly in line with the light direction. Here’s an example where I’ve just plugged random noise into the emissive and cranked it up.
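A sketch of that inversion, reusing the LUMEN_CARD_CAPTURE define from earlier (NoiseMask, CausticColor, Boost, and BaseEmissive are illustrative names):

#if LUMEN_CARD_CAPTURE
return NoiseMask * CausticColor * Boost; // amplified injection into the Lumen scene only
#else
return BaseEmissive;                     // normal on-screen emissive, no visible glow
#endif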


For an added cost, we can even sample the cubemap 3 times to simulate the natural phenomenon where different wavelengths of light have different IoRs, causing light dispersion. You can see this too, as a slight chromatic fringe on the refracted image.
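Roughly, the three-sample dispersion looks like this (the IoR values are illustrative):

float3 Rr = refract(-CamVec, N, 1.0 / 1.50); // red bends least
float3 Rg = refract(-CamVec, N, 1.0 / 1.51);
float3 Rb = refract(-CamVec, N, 1.0 / 1.52); // blue bends most
float3 Col;
Col.r = Cube.SampleLevel(CubeSampler, Rr, 0).r;
Col.g = Cube.SampleLevel(CubeSampler, Rg, 0).g;
Col.b = Cube.SampleLevel(CubeSampler, Rb, 0).b;
return Col; // the per-channel offset produces the chromatic fringe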

And since I mentioned that a fake shadow could be easily composited into the cubemap, here’s an example. Again, these are non-Lumen captures, so the light levels are quite off. But the fake shadow does make the fake reflection look a bit more seated in the scene.


(L) basic cubemap with no fake shadow reflection, (M) cube + procedural shadow reflection, and (R) path tracer.
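The procedural shadow itself can be as simple as darkening cubemap samples inside a cone opposite the light. A rough sketch (ShadowDir, ShadowCos, and ShadowStrength are illustrative parameters, not my exact setup):

float Cone = smoothstep(ShadowCos - 0.1, ShadowCos + 0.1, dot(R, ShadowDir)); // 1 inside the shadow cone
return Env * lerp(1.0, 1.0 - ShadowStrength, Cone);                           // darken the capture there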

Edit: Another example of the faked caustic, ’cause it’s fun…

That is such a smart technique for so many different reasons. First off, the creativity of using a cubemap sample to drive real-time emissive caustics is such an interesting mixture of high and low tech, and a reminder that cleverness is worth far more in real-time than raw compute alone. Second off, I am familiar with Half-Life: Alyx’s liquid and glass shading; I was astounded by the realism, especially for a VR game. It’s all just an incredibly clever series of tricks, but the idea of utilizing the double-warped cubemaps to drive reflection and refraction was inspired.

While you offered flipbooks as the real-time-friendly option, if you wanted to be extremely real-time unfriendly, you could use UE5’s new mirror translucency. It’s shockingly high-quality, although only truly useful in specific art cases. I’m unsure if it would be possible to feed Lumen scene data directly into your emissive masking system, but it makes me wonder if you could hack in real-time caustics using that method.

It does make me wonder if something similar to Control’s ray-tracing implementation might be useful here (I consider the game one of the prime examples of real-time ray tracing, as it blends new tech and old tech very elegantly). For global illumination, they blend between precalculated probes stored in a sparse voxel grid, which is used for large-scale GI. But small-scale GI (casting rays approximately 2 meters long) is done via ray tracing, achieving a nice blend of price and performance. It makes me wonder if the same thing might be possible for Lumen eventually, as a midway point between paying the memory price of a high-density probe grid and the compute price of full real-time GI.

Absolutely. It’s so easy to take the old tricks for granted when you can flip a switch and get near-photorealism in many cases. We get closer to just being able to brute-force our way through all these problems, and maybe one day the hacky ways will truly be obsolete. But until then, I’ll keep experimenting.

I hope we can sample the Lumen scene data in the material at some point. Given that the global distance field volume is exposed to the material editor, I suspect it’s technically possible in a similar manner.

No doubt lumen will continue to improve and take inspiration from industry trends.

Speaking of inspiration, and translucency… you gave me an idea.

We can use the alpha channel of the cubemap to carry a translucency mask. This can force the cubemap to render on top of the true Lumen reflection, superimposing objects into the reflection. They won’t line up perfectly, but that might not matter for some scenes.
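As a rough sketch of the compositing, inside the ray-traced branch (names are illustrative):

float4 Env = Cube.SampleLevel(CubeSampler, R, 0); // RGBA capture
return Env.rgb * Env.a; // alpha-masked emissive: zero where nothing was captured, so the true Lumen reflection shows through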

Here’s a quick test of a glass pyramid being superimposed.


Unfortunately, the bigger issue is that this wouldn’t understand if a player or dynamic object blocked the reflection. Here you can see it drawing in front of a sphere that should occlude it. But it might be useful to someone for something, so I figured I’d add it.

Ooh, that is an excellent idea; the Lumen scene has some invasive limitations that could be overcome with that. And I agree, Lumen will continue to grow and change as the industry’s features evolve. What I find interesting is that other systems like Nvidia’s RTXGI (which I did some relatively thorough testing and analysis posts on the forums a few months ago) commit so strongly to pure hardware ray tracing that they lose many of the advantages other real-time systems possess. Epic’s solution, which is to throw the light-transport kitchen sink at the problem and tune until photoreal, is far messier but generally better in game-ready cases.

Your use of translucency masks coupled with Lumen reflections to composite objects gives me an idea: Prey (2017) featured its looking glass technology, viewports that projected different scenes than what was actually happening behind the glass (e.g., a fish aquarium that looks real but is in fact entirely digital). I believe they did it with render targets, but I wonder if something similar could be used to provide real-time effects visible in reflections but not in the main world space.

Also, if distance fields are accessible to the material editor, I wonder if you could wire up the cubemap to a mask that detects distance-field occlusion and blocks out the parts of the cubemap that would be occluded. Just a thought.

To elaborate, RTXGI was awful compared to Lumen. It required manual placement of probe grids that had serious artifacting and visible lines, the GI was dynamic but commanded some relatively substantial costs, and it had an effectively hardcoded resolution in space, unlike Lumen, which puts more resources into rendering objects the bigger/closer they are to the player (this is especially visible with SWRT, where higher-resolution SDF ‘bricks’ are streamed in to improve the Lumen scene’s accuracy).

Also, just out of raw curiosity, have you figured out why Lumen lighting on small objects gets so grainy? I could understand blurriness from temporal accumulation and filtering, but I can’t actually figure out what part of the lighting pipeline is causing it (these are indirect shadows, but those aren’t handled by VSMs to my knowledge). I also can’t find a setting to improve it, even if I red-line everything else. Just wondered if you might have figured it out.

I already tried, and unfortunately it appears the lumen reflections don’t support it. Thankfully simple procedural masks and/or pre-baked AO seem to do a fine enough job. But hopefully the lumen scene eventually supports all material nodes. There are a few other broken ones, too.

As for small objects, not sure specifically what causes it. I assume there is some biasing for samples.

Almost certainly, although I’m surprised it doesn’t resolve after a long enough accumulation of frames. It’s an annoying hit to quality in an otherwise very good system, especially as there’s no discernible way to fix it.

By the way, speaking of Lumen reflections and material support, what do you think of Strata? I’m still not entirely certain what it is, but it appears to be some support for layered materials/BSDFs, or perhaps a more coherent physically-based material scheme? I’d love your thoughts; I’ve been following their code progress for a while but haven’t found a specific answer.

I haven’t seen a lot about it, but that’s my understanding. I’m pretty comfortable with BSDF pipelines like in other DCCs but haven’t used it yet.

If I had to guess, it’s probably at the behest of, or at the very least meant to appeal to, the non-games side of their business.

I was never a big fan of the shading model system, but I understand that mostly had to do with GBuffer limitations and pass optimization.

If the new system can deliver more flexibility at similar performance, then I’ll be eager to try it out. In particular, being able to mix different shading models in a single material seems like it could be a possibility? But I do worry about the implications of splitting the materials effort down two separate paths.


I would agree with you on your note of ‘splitting the materials effort’. I believe it’s being optimized for real-time as well as offline performance, but I think UE5’s ultimate goal is to be the omnibus DCC platform, similar to Nvidia’s Omniverse.

I think you’re right in that Epic is adding support for more complex and nuanced material systems and effects, much more in line with offline renderers like Blender or Maya. I also agree that Unreal’s shading system was the product of a very resource-limited games mindset that suddenly found itself with the ability to do offline-level features such as ray tracing. It seems like the feature gap between real-time and offline is closing to the point of the two beginning to blur, and I wouldn’t be surprised if real-time path tracing (albeit heavily optimized) is available as a serious featureset on next-gen consoles.

That being said, if real-time content creators need to get used to a whole new content-creation paradigm, that’s more than likely going to slow adoption down significantly. Lumen, for all of its technical inelegance, makes the front-end feature set relatively simple to develop with.

I’ve gotten more info on the Strata questions page I opened; it essentially looks like Epic is trying to create a more robust materials pipeline similar to offline renderers, although it’s using some wizardry that’s going way over my head at the moment (given your material work, @BananableOffense, you may understand it more than me).

Still, it appears to offer, among other things, multiple layered reflection lobes that interact with each other with varying roughness and transmission? As a pretty strictly real-time creator (who’s dabbled in very little offline rendering), the robustness seems quite incredible to me.

Yeah, I’ve been keeping an eye on it. I’m intrigued by the ability to customize the bytes per pixel. Theoretically that should mean you can allocate more memory and get more complex materials, although I don’t know what the upper limit for that is in real time. I think this should circumvent the wacky hoops needed in the past to achieve things like the clearcoat normal shading model, which had to encode the normals into an octahedral. I look forward to trying it when it’s a little more stable.

The clearcoat normal shading model was always a headache for me, and I truthfully don’t understand how it works. I think Strata has the potential to be a very powerful thing, and it brings the kind of robustness that next-gen games will need (the shader graphs needed to hack in certain features give me nightmares to this day).

The upper limit is a good question, as is scalability. If we’re given a straightforward way to reduce memory costs for different platforms, it would make development across them much simpler. Personally, I think we’ll be moving more in the direction of procedural textures in the long run, as compute is growing much faster than storage.

Also, when you say ‘encode the normals into an octahedral’, how do you mean exactly? They packed the normal map into some sort of cubemap-like atlas? I’m not very familiar with clearcoat, sorry.

Yeah, I agree that procedural is probably going to be the biggest thing for textures.
Packing normals (or any unit vector) into an octahedral is a way to convert a V3 into a V2 that can be turned back into a V3 later. It’s a bit complicated to describe, but imagine you have a unit vector pointing in an arbitrary direction. You can project that vector onto a point on an octahedron (similar to cubemapping, yes). An octahedron can be folded/flattened into 2D space. Now that vector is encoded as a point in UV space.
Reverse the operation and you get your vector back.
It’s not too unlike how you can derive the B component of a normal map with just the R and G.
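For reference, the standard octahedral encode/decode construction looks something like this (not necessarily the engine’s exact implementation):

float2 SignNotZero(float2 v)
{
    return float2(v.x >= 0.0 ? 1.0 : -1.0, v.y >= 0.0 ? 1.0 : -1.0);
}

float2 OctEncode(float3 n) // unit vector -> [0,1] UV
{
    n /= abs(n.x) + abs(n.y) + abs(n.z); // project onto the octahedron
    float2 e = (n.z >= 0.0) ? n.xy : (1.0 - abs(n.yx)) * SignNotZero(n.xy); // fold the lower half
    return e * 0.5 + 0.5;
}

float3 OctDecode(float2 uv) // [0,1] UV -> unit vector
{
    float2 e = uv * 2.0 - 1.0;
    float3 n = float3(e, 1.0 - abs(e.x) - abs(e.y));
    if (n.z < 0.0)
        n.xy = (1.0 - abs(n.yx)) * SignNotZero(n.xy); // unfold
    return normalize(n);
}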

Ah, that makes sense now: V3 into a V2 from which the V3 can be derived again. Thanks for explaining it!

And I agree, I’ve been learning more about procedural textures and their utility, and I earnestly think it’ll be the direction we move in once the pipeline for developing them gets a little more robust. Procedurals can take up more VRAM and compute, but massively less storage; they can have infinite resolution and versatility options that hand-authored textures don’t. Not to mention, as Nanite makes normal-map detail less and less necessary, the issues of using normal maps to fake geometry decrease.

By the way, I just checked GitHub; Strata just got a bunch of robustness improvements with AO and Lumen, and I’m considering checking it out soon and testing it. Current engine builds are 160GB, and I’m trying to figure out where I can trim that file size; it is massive.