Apparently (if I'm not wrong), you can exclude them from RT traces with one of the tricks mentioned in the comments, but screen traces will still be there, causing even noisier patches. (And disabling screen traces globally isn't really an option.)
You’re being a hypocrite. You end up ranting about me, telling me what to do when I already clearly stated that UE is miles ahead of anything publicly made.
You talk about how my stuff is unrelated to Lumen (I have a detailed post explaining why it is related), yet you abuse the thread in the same way to tell me off?
I have to read you talking down on the engine and the developers.
Yeah? So what? I'm allowed to complain and criticize the engine for its flaws.
There's not really another way to push things forward and suggest fixes, which I already have done and constantly give.
As a user of UE, it's my job to point out major issues that can cause quality slippery slopes in the workflow. It's the engine programmers' job (such as the Lumen team) to figure out how to fix the major problems that get pointed out. Their knowledge, their responsibility.
If anyone else has a problem with me, PM me instead of being a hypocrite.
It’s extremely simple, if you attack me here on this thread, I defend myself here on this thread.
1+1=2.
Maybe translucent but emissive could do the trick for you, since translucent materials don't show up in the Lumen scene and, since 5.3, also shouldn't appear in screen traces anymore.
Using translucency will work in most cases, so long as you are willing to break the emissive part off into its own mesh. Trying to enable translucency on the whole mesh (even non-emissive parts) is just asking for problems (performance, sorting, shadowcasting…)
Unfortunately the last time I brought this up, this was the current stance.
So I don't think we'll see a per-anything switch to stop traces that can't exploit data already present in the gbuffer - like we've seen now with translucency.
A per shading model toggle would honestly be good enough for most cases I think.
But there are also some cases where regular, opaque geometry has a large mismatch between its screen trace vs lumen scene representation. I’m not sure what existing data within the opaque model could be readily used to determine whether to screen trace or not.
An example of this is when using certain features that don’t properly render into the surface cache. Distance field based effects, or camera vector based material effects don’t work in the lumen scene, but will work fine in the screen trace. This can lead to jarring differences in material appearance that are exposed at points where trace coverage ends. In these cases I’d want the option to skip the screen trace.
I just haven’t thought of a common thread that can isolate them without using that extra bit.
I’m surprised to see you say this honestly. I guess it depends on what you’re using the camera vector for, but screen traces fail horribly if you’re using the camera vector for any sort of parallax effect (presumably because Lumen’s screen traces are just tracing the depth buffer)
Raytraced hit lighting produces the correct result while screen traces don’t:
I still feel like it could be a good idea to include GI/reflection screen trace toggles in the post process volume, so that they can be disabled regionally. I suppose this can be done already in Blueprint though so… meh
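For reference, assuming I'm remembering the names right, these are the cvars a Blueprint could flip (e.g. with an Execute Console Command node driven by a trigger volume) - the first for GI, the second for reflections:

```
r.Lumen.ScreenProbeGather.ScreenTraces 0
r.Lumen.Reflections.ScreenTraces 0
```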
Also still hoping Substrate might offer some more flexibility about where screen traces are skipped.
No problem with sharing your opinion, that's what this thread is for. But you are flooding this thread with useless information and making everyone else's life more difficult, because we (and the devs) don't have much time to read through all the comments and filter out the useful information. Many, many users have asked you politely to stop spamming this thread, yet you keep repeating your opinion. This thread is for reporting bugs and issues to the developers, not for meaningless debates, comparisons, etc., which do not help their work.
Since the surface cache computes the materials from fixed camera angles / cards, in this example of a Fresnel we can see it has blended together a capture from every side, resulting in the Fresnel being totally broken.
For a screen trace, the fuller the coverage, the more accurately it will reproduce the effect - but it will still have a jarring discontinuity at the transition point regardless of method.
On a material like this, I’d ideally want to use hit lighting and skip the trace altogether.
But without HWRT I think I'd rather keep the screen trace, since it will at least be accurate sometimes, whereas the surface cache will never be accurate.
So the question becomes, if I have a scene with POM or some other effect that doesn’t play nice with traces but I don’t want to disable them entirely for that shading model - what can act as my switch?
The only thing I can come up with is that the custom stencil could… while it’s widely used for other stuff, it would be the least disruptive place I can think of.
You could have a particular bit or range to act as a mask to skip traces, and the rest of the stencil bits could work as normal.
People who need the entire range of the stencil buffer are probably a smaller edge case than screen trace artifacts.
It could be as simple as a checkbox and int range to select the bits.
“Skip screen traces on custom stencil value/range [x-y]”
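On the content side, tagging meshes this way would be trivial with existing APIs; only the engine-side check would be new. A rough sketch of the tagging half (the meaning of the bit is entirely hypothetical):

```cpp
#include "Components/PrimitiveComponent.h"

// Sketch only: reserve the top custom-stencil bit as a hypothetical "skip screen traces" flag.
// SetRenderCustomDepth / SetCustomDepthStencilValue are existing UPrimitiveComponent APIs;
// the engine-side check that would read this bit is the part that doesn't exist.
static constexpr int32 SkipScreenTraceBit = 0x80;

void MarkSkipScreenTrace(UPrimitiveComponent* Prim)
{
	Prim->SetRenderCustomDepth(true);
	Prim->SetCustomDepthStencilValue(Prim->CustomDepthStencilValue | SkipScreenTraceBit);
}
```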
Hi all, I have a question for anyone interested and for @Daniel_Wright ,
Why so much effort for current-gen consoles, when they will become obsolete before long? (And not many games will be released with UE5 before that date.) I think nobody would mind not using Lumen or Nanite on current-gen consoles, even more so if using them requires tradeoffs that users don't want.
I have also seen comments pointing out (in a well-documented way) something like "this won't be possible for Lumen because of its nature". So I wonder: was Lumen conceived with the current-gen consoles as the main objective? Was Lumen conceived as a scalable solution, able to evolve and cover the needs of truly next-gen consoles and future PCs in 5 years? Or is it already limited from birth, and the solution/engine will be a new one?
Mid-gen refreshes are coming soon, and focusing on current-gen consoles means focusing on the most common PC GPUs. And the way Lumen is set up, it's swappable between different solutions behind the scenes.
We're only 3 years past the release of the PS5. The PS4 was sold for 7 years, and now - 10 years later - we're still seeing cross-gen launches, because the PS4 market share is still more than twice the size of the PS5's.
In other words, it’s a smart bet to invest in technology that works well on current gen hardware, because there is a good chance that it’s going to be relevant for a long while still.
They quickly become obsolete from a technical perspective, but not from a market perspective.
Thanks, I wasn't aware of that and just fixed it. Though I'm not sure how useful it will be for rough reflections due to noise.
For standalone Lumen reflections you can also enable r.Lumen.HardwareRayTracing.HitLighting.ReflectionCaptures 1, which will sample reflection captures at the last hit point, just like the deprecated Ray Traced Reflections did.
Like jblackwell wrote, this one is tricky as Lumen uses screen space traces for improving quality and screen space traces sample what you see on screen.
If you are fine with disabling screen space traces, then you can use the Ray Tracing Quality Switch Replace node to do custom material modifications for world space traces (Lumen Scene).
An alternative approach is to use r.Lumen.ScreenProbeGather.MaxRayIntensity and r.Lumen.Reflections.MaxRayIntensity to lower the exposure-relative energy limits. This will reduce noise, but will remove energy.
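For example, something along these lines (the values are purely illustrative starting points; lower clamps mean less noise but more energy loss):

```
r.Lumen.ScreenProbeGather.MaxRayIntensity 10
r.Lumen.Reflections.MaxRayIntensity 10
```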
When reading your feedback it occurred to me that we could set a max emissive limit by subtracting the emissive value from the final pixel color. I need to try it though to see if it can be useful for various use cases.
It was removed because instead of retracing the ray on a translucent hit r.Lumen.HardwareRayTracing.MaxTranslucentSkipCount times, we now skip all translucent meshes. I'm not aware of any issues with it.
Yes, our current approach doesn’t scale that well to offline rendering. Those are two big R&D topics for us and we are searching for an alternative approach for the high end PC use cases.
We already do this for skipping GI screen space ray hits on foliage (subsurface or thin two sided shading models). As you noticed it’s hard to generalize it though.
Likely current-gen consoles are going to be the main visual target for games for the next 5 years. Maybe even more - in the current generation, only recently have the first games targeting PS5/XSX+ started to appear. So that's a pretty important use case to solve well.
Yes, the idea was always to start from the consoles and then scale down and up. Likely it will be a mix of reusing existing Lumen parts and creating new ones as needed.
I thought reflection captures were disabled when Lumen reflections are enabled, as they're considered baked lighting? If not, then I have not been able to get the two to work concurrently at a project level, but I'm happy to try again.
To clarify on that, is translucency just wholesale not supported with lumen? And correct me if I’m wrong, but it seems like niagara particle effects aren’t showing up in offscreen reflections at all with lumen (not counting mesh particles or fluid sims).
They only work with standalone Lumen reflections, which are designed to allow using Lumen reflections with baked lighting.
Yes, Lumen world space rays aren’t able to hit anything translucent. It’s something on our todo list to fix, but likely not in a budget which is acceptable for most console games.
Being able to cut out real-time rendered reflections could help studios in some game situations.
Take an open world instance with world partitioning.
World partition streaming can mitigate the 351 limit and handle any major changes in the world, such as hourly or weather changes, that would show up with Lumen (both GI and reflections).
Imagine the HLOD system expanded with "world state" arrays (I think the night version of the City Sample did that? I may be wrong).
Capture information could be streamed just like HLODs.
A matrix system consisting of "world state profiles" created by the studio/environment artist.
A workflow such as building the world's captures under "5:00AM__Raining" or "12PM__ClearSky".
(Weather and any other conditions are named by studio)
After the dev says go, each reflection capture in the world captures the statically labeled objects with GI for each scenario, for the entire world.
Heck, it could also capture cinematic multi-bounce reflections on reflective objects in the scene (which may not be visible if memory needs to be kept in check with low-resolution captures).
Would be a nice fallback for SSR.
If I’m missing something obvious, or if I didn’t explain something very well let me know.
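To make the idea a bit more concrete, here's a purely hypothetical sketch of what a per-cell profile table could look like (none of this exists in the engine; names and types are made up for illustration):

```cpp
#include <cstdint>
#include <map>
#include <string>
#include <vector>

// Hypothetical data layout: prebaked capture sets keyed by a studio-defined
// "world state" label, stored per streaming cell and swapped alongside HLOD data.
struct WorldStateCaptureSet
{
    std::string StateLabel;            // e.g. "5:00AM__Raining" or "12PM__ClearSky"
    std::vector<uint32_t> CaptureIds;  // handles to the baked reflection captures for this cell
};

// One table per streaming cell; the active world state picks which set is resident.
using CellCaptureTable = std::map<std::string, WorldStateCaptureSet>;
```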
Thank you very much for such a detailed reply. I don't know how to quote you partially:
I have tested multibounce, working great now, thanks!
It's looking quite close to the old RT reflections, so I'm quite happy. As for weak points, as you said, it's super noisy for mid-rough surfaces such as polished concrete. It's even better to not use temporal accumulation for reflections, as it creates very 'fat' noise. (I'm referring to the second-bounce reflections, the ones in the mirror.)
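If it helps anyone reproduce this, I believe (not 100% sure of the name) the cvar to turn off temporal accumulation for reflections is:

```
r.Lumen.Reflections.Temporal 0
```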
Also, I have noticed increasing the max roughness is a performance killer, while in RT reflections it doesn't have that big an impact.
This persistence of light in the last one (the light propagation, and so the noise too, disappears when out of screen space) seems related to r.Lumen.ScreenProbeGather.ScreenTraces.HZBTraversal.HistoryDepthTestRelativeThickness and/or r.Lumen.ScreenProbeGather.ScreenTraces.HZBTraversal.RelativeDepthThickness, which I had set to 0.1 instead of the default 0.01. The higher it is, the splotchier it gets. But since it's not 0 by default, it will be a little noisy with the default value too, with no way to remove it 100%.
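For anyone who wants to check the same thing, these are the two cvars, set back to the 0.01 default mentioned above:

```
r.Lumen.ScreenProbeGather.ScreenTraces.HZBTraversal.RelativeDepthThickness 0.01
r.Lumen.ScreenProbeGather.ScreenTraces.HZBTraversal.HistoryDepthTestRelativeThickness 0.01
```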
And thank you for the rest of the reply! I also understand the commercial point of view, but I was afraid this might be a case of "bread for today, hunger for tomorrow" (the possibility of Lumen's scalability being limited by thinking in terms of today's technology instead of projecting into the future).
To my understanding, the lumen team is currently very well aware of the reflections issues, they just haven’t had the time to fix them. Daniel Wright’s previously mentioned tonemapping hack is good enough in most cases, but I’m hoping we’d be able to get something in before 5.4 releases.
And on your note of scalability for current vs. next-gen consoles, it makes a great deal of sense where the lumen team is coming from. Given just how long it took for the next-gen consoles to hit a meaningful market saturation, all bets are off in terms of what the next gen can support. As it stands, lumen still can't easily be shipped on XSX/PS5, never mind XSS. I'd imagine optimizations to hit console performance would be the priority, and once a given threshold is more or less reached, then eyes could be better turned towards enhancing the PC experience.
I suppose it depends on what you want to consider a cubemap, but lumen actually does support something quite similar to what I believe you’re elaborating on, if I understand correctly.
To support distant lighting, lumen has a series of world-space probes that cache distant dynamic lighting to keep the cost low where the scene doesn't need to be too dynamic. If you switch your PPV's reflection method to 'screen space', it does something rather clever: it enables SSR, but uses those world-space probes as an automatic fallback. Should an SSR ray miss, it will sample from the nearest probe; while the probes are significantly lower-res than the average cubemap, they're generally good enough for rougher diffuse lighting.
I haven’t played around with it too much, but I know there are ways to increase the resolution of the radiance cache probes, so you may be able to actually extract quite good reflections out of them.
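If I'm remembering the cvar correctly, something like the following should bump the world-space radiance cache probe resolution (at extra memory and trace cost); treat the exact name and value as unverified assumptions:

```
r.Lumen.ScreenProbeGather.RadianceCache.ProbeResolution 64
```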
Sure! But I was really talking about the second-bounce reflections. The first-bounce ones can be tweaked with some cvars, such as the one you mentioned, but the reflections reflected in the mirror (so, second bounce and beyond) can't.