A real-time path-tracer definitely isn’t out of the question; in fact, Nvidia’s branch of Unreal Engine (NVRTX) already has something like one. RTXDI is essentially path-tracing for direct lighting, and it allows for more or less unlimited shadow-casting lights (I tested it a while back; perf and stability weren’t incredible, however). I can’t remember if Nvidia supports ReSTIR GI on UE5 yet, or if that’s still in the works.
I tried out Portal RTX and Cyberpunk 2077 RTX (terrible perf on both), and what I found interesting was that path-tracing doesn’t really offer any apparent visual improvement over lumen. Both Cyberpunk and lumen cache long, diffuse rays (albeit via different methods), both handle translucency approximately (single bounce, no recursion from what I saw), and neither supports caustic lighting. The only advantage I noticed in real-time path-tracing was higher-quality GI in reflections, and it was only visible on extremely specular surfaces.
I’ll admit, I’m coming at this from a game developer’s perspective, so the use cases I look to are focused more on scalability than maximal quality. I think a real-time path-tracer for top-end scalability would be wonderful, but in game development there are many cases where you can trade a technical limitation for a change in art direction. For example, I discovered you only need to raise the roughness of mirrors to ~0.15 for them to be blurry enough to hide the surface cache’s low resolution. If I really needed high-quality reflections in a scene, I’d keep scene costs as low as possible, put in enough punctual lights to cover everything in the scene, and enable hit lighting at the lowest acceptable quality.
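For anyone wanting to try that setup, it mostly comes down to a few cvars. A rough sketch only; these names are from 5.0/5.1-era builds and may have moved or changed in your version, so double-check before relying on them:

```
; Only trace dedicated reflection rays on sufficiently smooth surfaces
r.Lumen.Reflections.MaxRoughnessToTrace 0.25
; Evaluate materials and direct lighting at ray hits ("hit lighting")
; instead of sampling the surface cache
r.Lumen.HardwareRayTracing.LightingMode 2
; Trade reflection resolution for speed
r.Lumen.Reflections.DownsampleFactor 2
```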
More so than a real-time path-tracer, I think the visual feature that could use the most development at the moment remains rough reflection quality. Even on a fresh _main branch, it’s still rather unusable in many-light scenes, or scenes with many (reasonably-sized) emissive objects. I understand the incredible web of tradeoffs that reflection denoisers represent (ghosting vs. noise vs. perf, etc.), but I still find it somewhat worrisome that cranking the reflection spp count to its highest somehow makes noise worse, not better. Even in MRQ I haven’t been able to get stellar results with rough lumen reflections.
Beyond that, I’m just looking forward to Substrate supporting colored shadows like they said it would (if it doesn’t already), and I hope it gets connected to lumen so we can have stained-glass effects and the like.
I agree too. Noisy rough reflections have been a problem for years, especially with SSR and ray tracing. I’ve had to tweak tons of cvars and options and resort to tricks to improve my reflections, always with some tradeoffs.
Anyway, I’m curious, do you have a simple test scene you could share?
Speaking of Portal, I think it certainly has some improvements over Lumen. I don’t own Cyberpunk to test:
I tweaked it to disable the roughness and normals of materials; the result is clean (not noisy) lighting and quite accurate reflections, even off-screen.
Portal RTX is using real time path tracing. It’s no surprise it is higher quality than Lumen. It also wouldn’t be able to run on console hardware. Lumen is meant to be able to run at up to 60 FPS on current gen consoles, since they will be relevant for a long time. I’d love to see real time path tracing built into Unreal too, but it’s still fairly niche for now as only the highest end PC hardware can hope to run it at a playable frame rate anyway.
Portal and Cyberpunk both do have very noisy reflections at roughly 0.1 to 0.4 roughness, just like Lumen reflections. It’s a sample-count problem inherent to real-time glossy reflections: Monte Carlo noise only falls off with the square root of the sample count, so the few rays per pixel a real-time budget allows will never be clean without a denoiser.
Anyway, I can understand a Lumen performance mode for current consoles and hardware, but if it’s a new engine designed for the next 10 years (the first year is already over), it should be a step forward. Maybe they will add it in the near future (or push Lumen to be more accurate and brute-force capable, for those who want it), I hope. It’s understandable to develop the performance mode first, but the highest-quality mode (even if unoptimized at the beginning) shouldn’t be postponed too long, for those with ultra-powerful machines and “industry” development, in my opinion.
@ZacD You’re right! But still less noisy than Lumen. Clamped to approximately 0.25:
But for me, beyond “the roughness issue”, which can be tweaked and improved in some ways, the most important difference is that this method reflects the environment without Lumen’s very weird and poor “artifacts”, which are hardcoded limitations and (almost) can’t be improved right now.
Hello, I’m new to Unreal Engine. I’m trying to render a shot for personal work and I have an issue: when I render, I get a flash on one frame and I don’t know why. I’m using Ultra Dynamic Sky and Easy Fog by William Faucher. If anyone knows a solution it would be appreciated. Thank you and have a good day.
I’ve been thinking on that a bit, and I’m actually curious whether the lumen team has any thoughts on what improved surface cache denoising would look like. The lumen paper already goes into it a bit; I believe they mentioned some amount of spatial and temporal filtering, but beyond that I don’t know what denoising strategy is really used for the surface cache.
Because the lumen scene is in world/texture space, the budget to facilitate denoising is likely rather small, but I’m wondering (if it isn’t done already) whether the same sampling system used to call up higher-res pages for important surfaces in reflections could be used to prioritize a higher-quality denoiser.
There’s a possibility the problem can’t be solved simply because there isn’t enough data to interpolate from, but final gather-quality GI in reflections seems to be a strongly desired feature for lumen. I’d be curious to hear the team’s thoughts on how feasible it is, and whether or not it’s a team priority at the moment.
I have been coding basic C++ for a while now, but I don’t think I have the skills to dive deep into engine source code. I was wondering if Lumen’s calculations use the maximum amount of macros over regular C++ functions.
I know UE uses macros in the source code, but I wanted to check and make sure Lumen’s algorithms use as many macros as possible instead of C++ functions, since macros are slightly faster, of course (replicate that slight performance boost thousands of times, as in a game engine, and it ends up being a big difference performance-wise).
The reasons I believe you would refrain from macros are readability, code length, app size, and the lack of compile-time error checks.
But given Lumen’s and Nanite’s sub-60fps performance in large-scale scenes (including FN), I would like to see, or honestly just know, whether macros (if they aren’t already used massively for lumen) could be employed beyond the amount currently used?
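For concreteness, here is a toy example of the two forms being compared (illustration only, not Lumen source):

```cpp
#include <cstdio>

// Function-like macro: the preprocessor pastes the text in directly,
// so there is no call overhead even in unoptimized builds.
#define SQUARE_MACRO(x) ((x) * (x))

// Plain inline function: type-checked and debuggable; in optimized
// builds it typically compiles to the same machine code as the macro.
inline float SquareFunc(float X) { return X * X; }

int main()
{
    float A = SQUARE_MACRO(3.0f); // expands to ((3.0f) * (3.0f))
    float B = SquareFunc(3.0f);   // usually inlined away entirely
    std::printf("%f %f\n", A, B);
    return 0;
}
```

(My understanding is that with optimizations on, the compiler inlines the function version to identical code anyway, and UE’s FORCEINLINE exists to force that where it matters; I’d just like confirmation for Lumen’s hot paths.)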
Oh boy, seeing multi-bounce lumen and substrate working with pathtracer is sooo nice. Can’t wait.
I wonder, though: if lumen can do multi-bounce, does that mean it will also look better than current 5.2? Like, will a mirror in a bathroom no longer look like something from a grotesque horror movie?
We do run some spatial and temporal filtering on radiosity (GI for the surface cache, used for 2+ bounces), but there are a bunch of issues due to surface cache representation quality, and heavy caching limits any spatiotemporal reuse: if you cache things heavily for performance, then temporal reuse possibilities are limited due to lag.
Indeed feedback could be used to prioritize GI quality, but at the moment I think the most limiting factor is surface cache representation.
So nothing concrete for now, but this issue is on our radar.
The reason that lumen reflections are currently limited to static lightmaps only is that lumen reflections need some scene representation to reflect. Since ray-tracing of any kind depends on a world-space scene representation, it can’t use the screen-dependent world rendered to the GBuffer. Furthermore, lightmass has all of the diffuse GI data baked down to textures, so it’s quite cheap to evaluate. If you didn’t have that, lumen would have to trace additional rays upon hitting a surface to figure out how that surface is lit, and at that point you’re just doing real-time path-tracing, which is wildly expensive. RTXGI is still ray-tracing; it just may be able to compute the extra (expensive) bounces.
Fundamentally, any real-time GI system will have the problem that long, diffuse rays are really expensive; you ideally want to avoid tracing them when you can and cache aggressively when you can’t. Lumen GI only traces short GI rays from screen probes and caches everything else one way or another. You could maybe come up with a better way of caching things, but as @Krzysztof.N mentioned, there’s a strong limit on what you can do without introducing too much latency. Short of neural methods, perhaps, there aren’t too many good solutions.
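To illustrate the latency tradeoff @Krzysztof.N mentioned, temporal caching at its simplest is just an exponential moving average. A toy sketch (not actual Lumen code):

```cpp
#include <cstdio>

// Toy temporal cache: blend each new noisy sample into a cached value.
// Low Alpha = stable but laggy; high Alpha = responsive but noisy.
float UpdateCache(float Cached, float NewSample, float Alpha)
{
    return Cached + Alpha * (NewSample - Cached);
}

int main()
{
    float Cached = 0.0f;
    // Lighting jumps abruptly from 0 to 1 (say, a light turns on); with
    // Alpha = 0.1 the cache takes many frames to catch up -- exactly the
    // lag that caps how aggressively history can be reused.
    for (int Frame = 0; Frame < 10; ++Frame)
    {
        Cached = UpdateCache(Cached, 1.0f, 0.1f);
        std::printf("frame %d: cached=%.2f\n", Frame, Cached);
    }
    return 0;
}
```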
From what I’ve read on GitHub, none so far. The lumen paper goes into this a bit: they support translucent reflections by depth peeling (rendering the scene twice, first with transparent objects, then opaque). They trace reflection rays for the transparent surfaces, then do the same for opaque. They only do this for the first layer of transparent geometry, however.
Side note: transparency, especially rasterized order-independent transparency, is one of the hardest problems in computer graphics, and I’ve talked to enough people way smarter than me on this subject to know I am not qualified to give more than a layman’s insight.
To allow for transparent reflections inside of transparent reflections, lumen would need to flag translucent objects and, during reflection rendering, depth peel and trace rays for every layer of transparency on screen. I could see it being architecturally impossible at worst, given the surface cache’s compact payload and RT shader setup (again, no transparency expert whatsoever here), or just extremely expensive at best.
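To make “depth peeling” concrete, here’s a toy CPU version of the idea for a single pixel’s translucent fragments (purely illustrative, nothing like lumen’s actual implementation):

```cpp
#include <cstdio>
#include <vector>

int main()
{
    // Depths of one pixel's translucent fragments, in arbitrary order.
    std::vector<float> FragmentDepths = {0.7f, 0.2f, 0.9f, 0.4f};
    float PrevDepth = -1.0f; // nothing peeled yet

    // Each pass "peels" the nearest fragment strictly behind the
    // previously peeled depth, yielding layers in front-to-back order.
    for (int Layer = 0; ; ++Layer)
    {
        float Nearest = 1e9f;
        bool bFound = false;
        for (float Depth : FragmentDepths)
        {
            if (Depth > PrevDepth && Depth < Nearest)
            {
                Nearest = Depth;
                bFound = true;
            }
        }
        if (!bFound)
        {
            break; // every layer has been peeled
        }
        std::printf("layer %d at depth %.1f\n", Layer, Nearest);
        // A renderer would trace reflection rays for this layer here,
        // which is why each extra layer multiplies the cost.
        PrevDepth = Nearest;
    }
    return 0;
}
```

Each peeled layer means a full extra set of reflection rays, which is presumably why lumen stops after the first one.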
Bear in mind, that’s just lumen as it currently works. Nvidia’s custom branch of UE5 had much more robust translucency support, and I believe it handled recursive translucency. But as multi-layer translucent reflections are an expensive and specific use case, there’s a chance Epic simply decided it wasn’t worth the man-hours to develop.