But what do they look like without Temporal AA/SS?
They look the same without AA, but a temporal component (which you personally don’t like at all) now seems to be involved in the Lumen reflections themselves: reflections gradually go from blurry to sharp over about a second once the camera stops moving. A very nice improvement over the default blurriness.
I’ve seen those artifacts on Vulkan, but everything seems to be running fine on DX12 with my 4080. Are you running DX12? Which GPU is it?
It’s DX12, a 4070, and an i7-9700K (Intel 9th gen). Substrate disabled, MegaLights disabled.
tried a driver update? nvidia and unreal had some compatibility issues the last couple of weeks. mhhhmm
also… interesting that you’re still rocking a 9700k. 8 threads are nice, but even the ps5 has more to offer if it’s pushed to the limit. tbf… the editor and engine are not that cpu hungry. mostly gpu.
back in the lab. i never noticed lumen does full hit-lighting bounces on characters. or is that new? i think i was too focused on the world shading. about those mirrors tho… and now being able to use baked gi.
how high are the chances of getting a shader combo that does realtime gi in the main trace and uses the worldspace lightmap volume for static reflection gi on characters? not necessarily the full lightmaps. just a basic volume sample and diffuse term to light up the dark shadows on characters. i’m aware this shot is screentrace territory, but if i turned the camera up to the upper torso it would fall apart in the reflection.
also noticed the surface cache noise in reflections has gone down a good bit in this test shot. which is good. reflection hit skylight 1 is still a secret option too - for color artists who’d rather juggle and blend the gi ambience and skylight. nice.
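The fallback being asked for above can be sketched in a few lines. This is a toy illustration only, not engine code, and `sample_light_volume` is a hypothetical stand-in for a baked volumetric-lightmap sample: in reflections, instead of full realtime GI on characters, sample a world-space lighting volume at the hit point and apply a flat diffuse term to fill in the dark shadows.

```python
# Toy sketch of the requested fallback -- nothing here is real engine code.

def sample_light_volume(pos):
    # Hypothetical stand-in for a trilinear sample of a baked
    # volumetric lightmap: here just a constant ambient colour.
    return (0.2, 0.2, 0.25)

def reflection_gi_fallback(albedo, hit_pos):
    # Basic volume sample plus a flat diffuse term (albedo * ambient),
    # with no directionality -- just enough to lift the dark shadows.
    ambient = sample_light_volume(hit_pos)
    return tuple(a * c for a, c in zip(albedo, ambient))

print(reflection_gi_fallback((1.0, 0.5, 0.5), (0, 0, 0)))  # → (0.2, 0.1, 0.125)
```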
CPU upgrades are on the list but not really needed; I haven’t been bottlenecked by it in anything but games for a while. Besides, if I really need the power I can overclock it quite a bit, which is enough for most things.
I’m definitely loving the new hit lighting GI. For most game scenes it doesn’t make a difference, but anytime there’s complex or dense geometry, it can be transformational. Obviously costs an arm and a leg, but the shading difference now between lumen and PT for anything opaque is very, very small. I haven’t tested out the new translucency behavior, but I’m kind of amazed at how little the visual difference is in most scenes.
Plus, MegaLights is just superb. I’m having to squint to notice the tiny differences in noise between standard RT and megalights, and it can handle so many more lights (although I don’t particularly understand the debug viewmodes atm).
I am sorry @Krzysztof.N , I got confused and gave you my desktop specs when I took that screenshot on my laptop. It was actually taken on a Framework 16 laptop, which has a Ryzen™ 7 7840HS and an AMD Radeon™ RX 7700S mobile GPU with 8 GB of VRAM. Everything else I said was true: HWRT, surface cache, no MegaLights. The error does not seem to reproduce on my desktop.
In addition, viewmode performance is extremely bad: about a third of what lit mode is.
Also, SSR is completely broken in the 5.5 preview: when Lumen is enabled, all reflections disappear with no fallback.
That sounds great; it’s nice to have some visual/rendering improvements coming.
Do we have comparisons somewhere? I read the UE 5.5 roadmap but couldn’t find anything about “hit lighting GI”.
Here’s a very artificial case to really illustrate this.
This is a material which outputs the camera vector as RGB. Screen traces are off to make it even more obvious.
Exhibit A: Surface cache:
Because the surface cache is rendered from cards, the view vector of each card is baked into it. This causes any material relying on this kind of effect to look wrong in its card. Fresnel is another example of this.
Exhibit B: Hit lighting for reflections
Here, the only difference is in the shiny gap/grout line between the tiles. With hit lighting for reflections, it now correctly picks up the true camera direction vector in reflections.
Exhibit C: Hit lighting for reflections and GI
Now the light emitted from the bottom and backside of the mesh is also red, because those surfaces properly evaluate the camera’s view direction instead of the view direction the card was baked from.
If lots of geometry in a scene has an inaccurate surface cache representation for reasons like this, the scene’s GI can be totally wrong, especially in cases where screen traces fail.
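A toy numeric version of what the exhibits show (plain Python, not engine code): a camera-vector material evaluated with the direction the surface-cache card was captured from gives a frozen, wrong answer, while hit lighting evaluates it with the real camera position.

```python
import math

def normalize(v):
    # Return v scaled to unit length.
    n = math.sqrt(sum(c * c for c in v))
    return tuple(c / n for c in v)

def camera_vector_material(hit_pos, view_pos):
    # Material that outputs the unit direction from the hit point
    # toward the viewer as RGB, like the test material above.
    return normalize(tuple(v - h for v, h in zip(view_pos, hit_pos)))

hit = (0.0, 0.0, 0.0)
card_capture_pos = (0.0, 0.0, 10.0)  # direction the card was rendered from
camera_pos       = (10.0, 0.0, 0.0)  # where the player camera actually is

baked   = camera_vector_material(hit, card_capture_pos)  # surface-cache answer
correct = camera_vector_material(hit, camera_pos)        # hit-lighting answer

print(baked)    # → (0.0, 0.0, 1.0) -- frozen at capture time
print(correct)  # → (1.0, 0.0, 0.0) -- true camera direction
```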
it’s basically this… gi “color transport”
it bounces light off characters, aka skeletal meshes. i did that on huge buildings already and some test chambers. but characters are not part of the lumen scene; they are potentially bvh components. i noticed the bounce on the white skin when i set up the reflection test. that’s why i changed the tint, to see if the red pops too. and it does.
I see, so skeletal meshes now have support for casting GI? I didn’t even think to test that, but that is really exciting. Between support for skeletal meshes, support for GI from animated materials, and what @BananableOffense has documented, we have a much more robust lighting path. Particularly with the directionality-influenced materials from substrate, the lighting can support those complex materials in a much stronger way.
at some point i set her up and bounced a spot lamp off static world geo, but… i never checked if it bounces off herself. this is good for believable character interaction. or a general lighting detail you can throw in. yep
When you enable hit lighting for everything, you get materials and direct lighting calculated at every hit. Just like with hit lighting for reflections, but now you can enable it for GI as well.
Still, secondary bounces and skylight depend on the surface cache. Something for a future release…
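As a rough sketch of that split (my reading of the description above, not actual engine code): the first hit of a GI ray gets the full material and direct-lighting evaluation, while deeper bounces and skylight still read the surface cache.

```python
# Toy model of the hit-lighting-for-GI split described above -- not engine code.
# The string-returning stubs stand in for the real shading work.

def evaluate_material(hit): return f"material({hit})"
def direct_lighting(hit):   return f"direct({hit})"
def surface_cache(hit):     return f"cache({hit})"

def shade_hit(hit, bounce, hit_lighting_for_gi=True):
    if bounce == 0 and hit_lighting_for_gi:
        # First hit: evaluate the real material and direct lighting.
        return (evaluate_material(hit), direct_lighting(hit))
    # Secondary bounces and skylight still come from the surface cache.
    return (surface_cache(hit),)

print(shade_hit("wall", 0))  # → ('material(wall)', 'direct(wall)')
print(shade_hit("wall", 1))  # → ('cache(wall)',)
```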
defo cool looking. almost pathtraced. i can’t imagine how you did it to run in realtime. i know pathtracing is mostly backwards reliant. like… hit the arm from the camera. bounce and sample gi somehow and follow the reflection vector to hit the chest. sample gi again and bounce to get the light to make the red material shine. then go back and apply it to the arm as reflection. it’s kinda magic code to me. but it loooks goood.
I know it’s been said that the cost is pretty heavy. Is there any possibility of overriding this on a per-mesh level? It seems like most meshes would not benefit greatly from this feature, but every now and then it might be important, and turning it on for the whole scene could be overkill. If we can exercise more control over when to evaluate the material, maybe it opens the door for secondary bounces and so on. This could look like a “number of material hits to evaluate” setting for reflections and GI each; setting it to 0 would force surface cache only.
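The proposed per-mesh control could look something like this. This is a purely hypothetical API sketch, no such setting exists in the engine today: each mesh carries a budget of material evaluations per ray type, and a budget of 0 falls back to the surface cache.

```python
# Hypothetical per-mesh hit-lighting budget, as proposed above --
# not a real engine setting. Budget 0 forces surface-cache-only lighting.

def pick_lighting_path(material_hit_budget, hits_evaluated_so_far):
    if hits_evaluated_so_far < material_hit_budget:
        return "hit_lighting"   # evaluate the real material at this hit
    return "surface_cache"      # cheap cached lighting beyond the budget

print(pick_lighting_path(1, 0))  # → hit_lighting
print(pick_lighting_path(1, 1))  # → surface_cache
print(pick_lighting_path(0, 0))  # → surface_cache (forced off)
```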
Can you try a driver update? We have seen similar issues on NV GPUs and reverting driver to a previous version fixed it.
Are there any Lumen lighting related processes done as objects are drawn to the main render targets?
I’m talking about a lighting render target that’s updated along with the albedo/roughness/velocity/etc. (if so, can you recall the changes, if any, over versions?)
Maybe it’s just skylighting that’s done, no harm in asking here first before doing a software capture.
(Edit, it’s just skylight related processing)
Also, I know making Lumen more compatible with non-nanite HISM instances doesn’t follow the “make everything nanite” agenda but nanite is such a joke in terms of performance and visual quality. Also, please work on denoising Lumen better with these settings:
r.Lumen.ScreenProbeGather.Temporal.RejectBasedOnNormal 1
r.Lumen.ScreenProbeGather.Temporal.NormalThreshold 2.7
r.Lumen.ScreenProbeGather.Temporal.MaxFramesAccumulated 25
Almost fine for fast motion, except for salt-and-pepper aliasing on anything newly disoccluded, or on objects in the distance that get shaken up by camera jogging movement. Either work on detecting these areas and force computations to resolve them quicker, or denoise them better with something effective against salt-and-pepper noise (such as FXAA, of all things; you’ll notice it). Or just implement a good fallback interpolation.
EDIT(hardware)
Hardware: desktop 3060 at native 1080p. This is a 13-teraflop machine, and I have three instances of faster/more stable GI that run great on this hardware. It’s also not that far from 9th gen, and NO, the PS5 Pro is not the target and should not be the target, since everything on base PS5 either looks like temporal SLOP or runs at a resolution so high that gameplay takes a massive hit, in standards that should be set by now.
Software (cheaper than HW) probe gather on High costs 2.6 ms, or 2 ms without short-range AO (which looks terrible and unstable without incompetent TAA hiding the noise). The cost does not change based on how dynamic the scene is, so again I ask and URGE you to work on ways to take advantage of less dynamic environments, such as static rooms that only need to light up when a door or window opens; adjusting the update rate globally is inefficient.
EDIT:
Please implement subpixel jitter awareness to the RejectBasedOnNormal mode.
If I use half-competent TAA (we run post-process edge AA before DOF to anti-alias stair-stepped edges that subpixel jitter misses), Lumen fails to remain stable on object edges because there is no good, coherent fallback.
Please test with these, using r.AntiAliasingMethod 2 (TAA) at 1080p, v-synced to 60 fps:
r.TemporalAA.Quality 2
r.TemporalAACurrentFrameWeight .6 (with vsync)
r.TemporalAASamples 2
r.TemporalAAFilterSize 0.09
r.TemporalAA.Upsampling 0
r.TemporalAA.R11G11B10History 1
r.TemporalAA.HistoryScreenPercentage 100
r.TemporalAACatmullRom 0
If you are on a 4k monitor, use
r.ScreenPercentage 50
r.Upscale.Quality 0
Test if r.TemporalAA.HistoryScreenPercentage looks more clear/stable at 50 or 100
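For anyone who wants to keep test settings like these between sessions rather than re-entering them in the console, UE reads cvars from the project’s `Config/DefaultEngine.ini` under a `[SystemSettings]` section (standard engine mechanism); for example, with a few of the values from this post:

```ini
[SystemSettings]
r.AntiAliasingMethod=2
r.TemporalAA.Quality=2
r.TemporalAACurrentFrameWeight=0.6
r.Lumen.ScreenProbeGather.Temporal.MaxFramesAccumulated=25
```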
you’ll capture it anyway. you’ll figure it out. disassemble it. do a deep dive into the engine for a video, not this shallow bash-style video disassembling the frame output. you can also pick up the engine source code and see how it works. maybe mod the engine. hmmhmm…
If you’re aiming for maximum performance, why not port Nvidia’s RTXGI plugin from version 5.0 to the engine you’re using? It was a highly efficient version based on dynamic probes, offering significant scalability across various configurations, and it runs on Nvidia, AMD, and consoles.