I have also noticed something strange with the global lighting, but I wasn’t able to reproduce it again (it happened to me twice, with that scene, just after fixing the exposure values, so maybe it’s related to exposure): sometimes, after reopening the project and/or the map (not sure which), or after duplicating the scene, the global illumination was noticeably darker. You could try some random reopening or duplicating to see if it shows up.
that’s a depth sorting artefact. one of the pitfalls of translucency. it traditionally has no depth, aka it does not write depth, but reads it to decide if it’s covered up or not. the further apart the planes are, the further away the artefact appears. it shows you the buffer precision for z-culling. the artefact/effect is clearly visible on a sphere. i’m kinda puzzled how exactly it determines it’s covering up the backface. hmm, that’s a dig into the shaders.
I see. It’s due to a depth test that’s too loose when deciding which translucent surface should get front layer reflections. At the moment it’s controlled by r.Lumen.TranslucencyReflections.FrontLayer.RelativeDepthThreshold, and you can lower it to reduce those artifacts, but we need to make a better test here based on the variable depth buffer precision instead of a constant distance.
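As a rough illustration of what a relative depth threshold test does (a conceptual sketch only, not the actual Lumen shader; all names below are made up):

```cpp
#include <cmath>

// Conceptual sketch, not engine code: a translucent pixel gets front-layer
// reflections when its depth is close enough to the front-layer depth,
// measured relative to that depth rather than as an absolute distance.
bool IsFrontLayer(float TranslucentSceneDepth, float FrontLayerSceneDepth,
                  float RelativeDepthThreshold)
{
    float RelativeDelta =
        std::fabs(TranslucentSceneDepth - FrontLayerSceneDepth) / FrontLayerSceneDepth;

    // A large threshold accepts panes that are far apart (the artefact above);
    // a small one only accepts the true front surface.
    return RelativeDelta < RelativeDepthThreshold;
}
```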
Yep, the constant distance is a little uncomfortable.
I have found that a value of r.Lumen.TranslucencyReflections.FrontLayer.RelativeDepthThreshold 0.00001 works fine to avoid this, but it also makes some of the glaze on the glass disappear at a certain distance, though it’s barely noticeable. It also gives only a weak reflection of the environment.
I have also tried a value of 1 (anything less than this will generate the “clip plane” when near the glass) and it has a stronger “mirror” effect, maybe reflecting the environment too much. It will also eliminate “background” reflections; for example, a pool behind a window.
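In case anyone wants to try the same values: the cvar can be changed at runtime from the console, or pinned in the project config (the [SystemSettings] section is the usual place for render cvars, as far as I know):

```ini
; console (runtime):
;   r.Lumen.TranslucencyReflections.FrontLayer.RelativeDepthThreshold 0.00001

; DefaultEngine.ini (persistent):
[SystemSettings]
r.Lumen.TranslucencyReflections.FrontLayer.RelativeDepthThreshold=0.00001
```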
found a lil thing in my “torture” map when i re-enabled the rgb cubes. attempted a beauty shot with screentraces. i dunno how exactly they work, but… in both cases the ceiling surface to be reflected is visible, although at very steep angles, but it does not manage to screentrace the reflection (shot01), or has a visual limit where it can pick it up (shot02). the walls however are fully screentraced (or they’d look very pixelated. it’s a “torture” map. ).
note: it looks beautiful nonetheless. it improved quite a bit (how far you can stand away from them and they still light up the wall nicely) since i started this map in 5.1 (i skipped 5.2, jsyk).
Not sure if I should report it here or elsewhere, but on the GitHub 5.4.0 version, using Lumen and VR crashes the editor immediately once you start VR. From 5.1 until 5.3.2, using VR & Lumen works nicely, and that combination is really great for Archviz dev. Crossing fingers this gets picked up and fixed by the team before the official 5.4.0 is released.
Edit - link to a crash report
LoginId:c0b4334942f3e950c749a09abbc76528
Edit 16-nov: seems like this crash is not related to (only) Lumen. In the latest GitHub version of yesterday (15 nov) Lumen & VR work fine, but I do get some crashes when Nanite is enabled AND VR as well, on meshes from 5.1 projects that were converted to 5.4.0. Without VR, it works fine. Have to investigate more…
Assertion failed: (Index >= 0) & (Index < ArrayNum) [File:C:\Github\Engine\Source\Runtime\Core\Public\Containers\Array.h] [Line: 743] Array index out of bounds: 1 from an array of size 1
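For context, that assert is Unreal’s standard TArray range check, so something in that code path is indexing element 1 of an array that only holds one entry (my guess would be a per-view array sized for a single view while VR renders two, but that’s pure speculation). A hypothetical minimal repro of the pattern, just to show what the message means (not the actual crash site):

```cpp
#include "Containers/Array.h"

// TArray::operator[] runs a range check that fires exactly this
// "(Index >= 0) & (Index < ArrayNum)" assert on out-of-bounds access.
void ReproRangeCheck()
{
    TArray<int32> PerViewData;
    PerViewData.Add(0);                   // ArrayNum == 1

    const int32 ViewIndex = 1;            // e.g. a second (stereo) view
    int32 Value = PerViewData[ViewIndex]; // Array index out of bounds: 1 from an array of size 1
    (void)Value;
}
```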
I’m honestly surprised it works; I was under the impression that the Lumen team largely didn’t think Lumen + VR would be viable, on account of the pixel-scaled costs of Lumen and the massive pixel throughput of VR. Are you using upscaling of some sort?
@jblackwell I was really surprised as well when it became possible in 5.1 - For the work of an architect like me, this option is really really really wonderful.
I tested with DLSS half a year back but nowadays I don’t anymore. I think it broke at some point and I never bothered again to re-enable it. Will have to try again…
I use it on my desktop which I upgraded with a 4090 just to be able to get some decent fps.
During design, I now never bake lights. Only when I have to do a presentation at a client’s (using a laptop with a 2070) or when making videos (using a stabilized, dampened VRSpectator) do I bake lights.
Lumen + VR is comically powerful. I remember experiencing the Oculus Quest 2 for the first time and being blown away by VR as a spatial medium, even with the incredibly limited environment fidelity and baked lighting. Now that we can have dynamic, per-pixel GI in VR, the possibilities for actually delivering on VR’s ambitions are a lot closer. I’ve had a fair number of people hype VR design to me, but I haven’t yet found a headset that meets all my needs.
I feel like I understand most of how Lumen is architected, at a high level at least, but even after skimming this paper I can’t quite follow it. The project leader is clearly very enthusiastic and aware of the latest developments in computer graphics, but I can’t quite understand what he’s talking about.
The core idea of trading off radiance-solving methods at different resolutions does appear to hold for both of them: Lumen’s tracing methods go contact AO → screen probes → world probes → skybox, and the things it’s tracing against are screen space → surface cache → far field → skybox, at least to my knowledge.
If I understand it correctly, the radiance cascade paper is making the argument that real-time path tracing via interpolating noisy samples can’t really work (due to the amount of interpolation needed), and in a sense Lumen works similarly: each screen probe represents many rays (8x8 normally) bundled together, effectively sampling 64 rpp. Those probes aren’t placed at every pixel, however, just wherever is going to gather optimal lighting information, and the probe radiance is then interpolated to the GBuffer. This means that the lighting contributing to the final image is actually incredibly stable.
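To make the probe interpolation idea concrete, here’s a rough sketch of how I picture it (purely my own toy code with made-up names, not Lumen’s actual implementation): pixels between probes blend the probes’ traced radiance, weighted by how well each probe’s depth matches the pixel, rather than tracing their own rays.

```cpp
#include <array>
#include <cmath>

// Conceptual sketch only: a screen probe bundles an 8x8 grid of traced
// radiance samples (64 directions), placed at a particular scene depth.
struct Probe
{
    std::array<float, 8 * 8> Radiance; // one scalar per traced direction, for brevity
    float Depth;                       // depth at which the probe was placed
};

// Interpolate one traced direction at a pixel lying between two probes.
// Depth-aware weights reject probes whose depth doesn't match the pixel's
// surface, which keeps the gathered lighting stable.
float InterpolateDirection(const Probe& A, const Probe& B,
                           float PixelDepth, float ScreenLerp, int Dir)
{
    auto DepthWeight = [&](const Probe& P)
    {
        float RelErr = std::fabs(P.Depth - PixelDepth) / PixelDepth;
        return std::exp(-100.0f * RelErr * RelErr); // falls off fast for mismatched depth
    };

    float Wa = (1.0f - ScreenLerp) * DepthWeight(A);
    float Wb = ScreenLerp * DepthWeight(B);
    float WSum = Wa + Wb;
    return WSum > 0.0f ? (Wa * A.Radiance[Dir] + Wb * B.Radiance[Dir]) / WSum
                       : 0.0f;
}
```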
what is this about? throwing buzzwords? that’s very amateurish, sry. maybe you should learn to build and bench lumen.
lumen does all that. and better. those cascades fail to impress. looks like fuzzy shadows and that’s it. no real GI shown. and lumen does proper GI, which is not bound by screen space.
what do you think illuminates the ceiling? that lamp is in the sky and very much offscreen. so is the floor. screenspace GI w/e. that would not do that.
I’m further than you from understanding it, @jblackwell, but you have clarified some things for me, thanks.
Thank you for your kind words, @glitchered. Sorry for not being as expert as you are.
I posted it because it has been published on 80 lvl, so it must be ‘rich’ in some way. It could be interesting to curious people and/or the Lumen team.
I don’t know if it’s only screen space or not, but I have seen some parts in this video that look like they are offscreen. I don’t know about the GI, but at least the ‘direct lighting’:
Anyway, you can’t tell me that Lumen has no limitations, noise, or performance issues.
this looks cool. nice soft lighting. but i don’t see much interaction. this is not GI per se. it’s volumetric illumination, but no surface color pickup and transport of it. bounces. the core of GI.
it’s like just emissive shapes transporting light via a volume. all white surfaces, no bounces. might as well do it with a modified diffuse term and some lamps inside the beetles and statues.
The paper did show reflections working, although I still cannot understand how. From what I’m reading, the paper is more concerned with the final gather method than with the media that is actually being traced against, but I could be wrong.
yep. the paper has some nice shots. the coder is definitely a demo scener.
thinking about technicalities, it seems to be a mip-mapped volume filled with light intensity values, and it’s raymarched. when you hit a surface you march along the normal and get the diffuse response. when you march along the reflection ray you get the specular response. i’ve done some marching myself. just some ice clouds tho.
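rough toy sketch of that guess (my own code, not his; positions assumed normalized to [0,1]^3, everything else made up):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

// toy sketch of the guessed technique: a mip-mapped 3d grid of light
// intensity, ray marched from a surface point. marching along the normal
// ~ diffuse response, along the reflected ray ~ specular response.
struct Vec3 { float x, y, z; };

struct LightVolume
{
    std::vector<std::vector<float>> Mips; // Mips[0] = full res, each next mip is half res (pre-blurred light)
    std::vector<int> Res;                 // edge length of each mip

    float Sample(const Vec3& P, int Mip) const
    {
        const int R = Res[Mip];
        auto Cell = [&](float v) { return std::clamp(static_cast<int>(v * R), 0, R - 1); };
        return Mips[Mip][(Cell(P.z) * R + Cell(P.y)) * R + Cell(P.x)];
    }
};

// march from Origin along Dir with growing steps, reading blurrier mips the
// further out we go (a cheap stand-in for a widening cone), and sum it up.
float MarchCone(const LightVolume& Vol, Vec3 Origin, Vec3 Dir, int Steps)
{
    float Total = 0.0f;
    float T = 0.01f;    // current distance along the ray
    float Step = 0.01f; // current step size
    for (int i = 0; i < Steps; ++i)
    {
        const Vec3 P{ Origin.x + Dir.x * T, Origin.y + Dir.y * T, Origin.z + Dir.z * T };
        const int Mip = std::min<int>(static_cast<int>(Vol.Mips.size()) - 1,
                                      static_cast<int>(std::log2(1.0f + T * 64.0f)));
        Total += Vol.Sample(P, Mip) * Step; // weight each sample by the distance it covers
        T += Step;
        Step *= 1.5f; // exponential step growth, like a cone widening
    }
    return Total;
}
```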
He has some videos on his channel, with different situations and tons of advanced experiments, not only realtime GI; he even created a path tracer 7 years ago. Some demos are newer, some older. For example, this one, even if it’s screen space, is from 5 years ago (!). The new demos seem to handle offscreen, and I suppose they could be animated like this one, too:
Noo, don’t worry! It was a sincere comment, just informing and thanking you, not a reproach.