My bad, I must’ve made a mistake.
Hi, first of all congrats on the amazing work @Krzysztof.N @TiagoCostaUE! I have some questions regarding scalability.
-
In the long run, is MegaLights meant to work only on high-end PCs and next-gen consoles, or are you planning to scale it down a bit?
-
How do you think we should work with MegaLights in a game that is intended not only for next-gen consoles, but also for older consoles and mid-spec PCs that do not support it?
Are you sure you only get glitches when using MegaLights? I also noticed glitches when MegaLights is turned off, so I expect something else is wrong with VR & Lumen/reflections in 5.5p1. You can test by creating a brand new 5.5p1 Archvis template project, then adding the VR template pack and the OpenXR plugin.
Great work on this. Adding my wish for this to be supported in VR with deferred rendering, but in order for us to be able to use 5.5 in VR at all, the very long-standing VR shadowing issues must be fixed as well.
you expect a lil much, tbh. it does deliver nice fps on the average pc screen, but vr is a tall ask. you’re asking for something like 120 fps deferred at 4k+ to feed both of your eyes with content, and it’s not there yet. unless you couple 2 or more 4090s to your eyeballs, the matrix gotta wait. you gotta play that on a screen for now, and chill in the real world.
What if we don’t use Lumen and just use MegaLights?
Not really.
Current-gen consoles are already pretty challenging, so I’m not sure if it’s possible to scale down further.
Also, MegaLights is experimental and game production takes years, so it would take at least a year for a first game using MegaLights as its main lighting technique to be released. At that point it’s unlikely that anyone would still want to target prev-gen consoles, and games will likely be able to target RT-capable PC GPUs as their min spec.
The plan is to make a light scalability tool, which would allow you to modify any light property per scalability level. For now you can do it either through Blueprints or through custom engine modifications that enable similar workflows.
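Until that tool exists, here’s a minimal sketch of the per-level workaround in C++. It assumes you read the sg.EffectsQuality scalability group and scale the light yourself; the helper name, the level mapping and the multipliers are made up for illustration, not an engine API.

```cpp
// Hypothetical helper: scale a light per scalability level until a proper
// light scalability tool exists. The mapping below is an assumption.
#include "Components/LightComponent.h"
#include "HAL/IConsoleManager.h"
#include "Math/UnrealMathUtility.h"

void ApplyLightScalability(ULightComponent* Light, float EpicIntensity)
{
	// Read the current "Effects" scalability group (0 = Low .. 3 = Epic).
	const IConsoleVariable* EffectsQuality =
		IConsoleManager::Get().FindConsoleVariable(TEXT("sg.EffectsQuality"));
	const int32 Quality = EffectsQuality ? EffectsQuality->GetInt() : 3;

	// Illustrative per-level multipliers; tune per project.
	static const float Scale[] = { 0.25f, 0.5f, 0.75f, 1.0f };
	Light->SetIntensity(EpicIntensity * Scale[FMath::Clamp(Quality, 0, 3)]);
}
```

The same idea works in Blueprints with the Get Console Variable Int Value node driving Set Intensity (or any other light property) on the light component.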
It should be fixed now.
I just fixed a crash in 5.5 preview when MegaLights is used without Lumen, so it should work fine now.
That’s really good to know regarding scalability options. I’ve run profiling on the GPU side to understand performance, but what does the CPU utilization look like?
There isn’t any CPU overhead. MegaLights is fully GPU driven, so the only thing we need from the CPU is a small structure describing the lights.
There’s now r.Visibility.LocalLightPrimitiveInteraction, which allows you to skip most of the per-light CPU overhead when all lights are MegaLights (without using VSM shadowing). It’s a stopgap hack/solution until we automatically skip the CPU overhead for lights using MegaLights.
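For reference, a sketch of flipping that CVar from game code (the same value can be set in DefaultEngine.ini or the in-game console). I’m assuming 0 is the value that skips the interactions, so verify against the CVar’s help text:

```cpp
#include "HAL/IConsoleManager.h"

void SkipLocalLightPrimitiveInteractions()
{
	// Assumption: 0 disables per-light primitive interactions; check the
	// CVar help before relying on this.
	if (IConsoleVariable* CVar = IConsoleManager::Get().FindConsoleVariable(
			TEXT("r.Visibility.LocalLightPrimitiveInteraction")))
	{
		CVar->Set(0, ECVF_SetByGameSetting);
	}
}
```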
What if we don’t use Lumen and just use MegaLights?
To my mind, he was asking whether MegaLights can function as a replacement for Lumen. Does one require the other, or are they meant to be used in tandem?
And my secondary question: I thought I read that MegaLights was meant to replace Virtual Shadow Maps?
I’m coming at this from the point of view that there seem to be a whole lot of lighting options coming down the pipe, so what’s the overall vision going forward? Is there something we should avoid investing in?
I figured if it was going to replace anything, it would be the old ray traced shadows, since they have major self-occlusion issues on Nanite geometry and aren’t as performant as MegaLights.
i’ll have to jump into nanite at some point, but rt shadows are still good for traditional “low”-poly geometry and especially useful for the directional lighting pass and better soft shadows. they compute every frame but come without the memory footprint of vsm. i wouldn’t throw them out yet.
MegaLights handles direct lighting, and Lumen handles indirect, so they solve different problems. Though there are some potential future areas of overlap, like emissive and skylight lighting.
MegaLights can work without Lumen.
The overall idea is that moving forward people will use HWRT Lumen together with MegaLights. This way you only need to deal with one ray tracing representation (BVH), and BVH build times are shared between both, so you can spend more memory and GPU cycles on a higher fidelity BVH. It will also open the door for more work sharing, like denoising etc.
It’s not a coincidence that we did a lot of HWRT work over the last 6 months and want to move everyone to HWRT Lumen :).
It’s a straightforward (future) replacement for ray traced shadows.
As for VSM, I expect that most of the local lights will move to MegaLights. GPU budget is limited, so we can’t just add MegaLights on top of existing Deferred Renderer/VSM lights.
Later, as denoising, the RT path and the BVH improve, MegaLights should become our default solution for at least local lights.
Thank you very much for your quick and detailed response. I know most of us appreciate your efforts in this regard.
Genuinely appreciated. Now go away and have a nice weekend.
I am getting a bug using MegaLights with MetaHumans. When I turn on MegaLights in post process, the textures on the MetaHuman turn white. It is very weird: the light is working, and the indirect light shows textures on the MetaHuman correctly, but the direct light renders it completely white. Is there a known limitation or issue with MegaLights and MetaHumans?
I am having an issue with a night scene rendered in MRQ with MegaLights enabled. With it disabled, the ground shadow in the same scene is fine. When MegaLights is enabled, the ground shadow from point lights becomes dark and blocky. See the image; this issue does not show in PIE.
What are your MRQ settings? I’m specifically interested in Anti-aliasing, Spatial Sample Count and Temporal Sample Count. Likely something there doesn’t play nicely with the MegaLights denoiser.
The example I posted was using 30 spatial and 3 temporal. After your comment, I bumped it higher and could watch the render get iteratively better as subsamples were added in the render preview. It took around 1800 subsamples until it looked like the screenshot with MegaLights disabled (the correct ground shadow). It didn’t seem to matter whether it was high temporal or high spatial; as long as the combined count hit around 1800 subsamples, the shadow was fixed (so 42 by 42 worked, also 180 by 10, etc.).
Hey! I think this is more related to shadows than MegaLights, but I was wondering if you could tell me why ray traced shadows look weird to me in some cases like this. What are they tracing against that causes the lack of hard shadows compared to VSM?
And my other question: do you know by any chance whether the MegaLights demo used only VSM shadows, or whether RT shadows were there too?
I guess VSM is more performant, but is there a huge difference now with MegaLights enabled?
MegaLights has its own custom ray tracing pass, which doesn’t share anything with Ray Traced Shadows or Deferred Lighting outside of the trace ray call.
As for the missing shadows, likely the mesh in the BVH doesn’t contain those holes due to automatic simplification. Take a look at the RT debug view (or Lumen Overview) to see what’s inside the BVH, and play with the Nanite fallback mesh settings in order to change that mesh. There’s also r.RayTracing.Nanite.Mode, but it’s quite slow (as in not usable outside of offline rendering) and not well tested.
The demo uses only ray tracing, as this is the only scalable option. When you enable VSM for MegaLights it does skip some of the cost, but it still has the per-light cost of the VSM setup (memory, CPU and GPU overhead for generating shadow depths per light). With ray tracing there isn’t such a per-light cost: no matter if it’s 1 or 1000 lights, it’s still the same ray being traced against the same BVH.
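To make that scaling argument concrete, here’s a standalone conceptual sketch (plain C++, not Epic’s implementation) of fixed-budget stochastic light sampling: each pixel traces the same small number of shadow rays against one shared BVH regardless of how many lights exist, which is why the ray traced path has no per-light setup cost. All names and the uniform sampling are illustrative.

```cpp
// Conceptual sketch only: fixed per-pixel ray budget, independent of light count.
#include <random>
#include <vector>

struct Light { float Intensity; };

// Stub standing in for a shadow-ray occlusion query against the shared BVH.
bool TraceShadowRay(const Light& /*ToLight*/) { return true; }

float ShadePixel(const std::vector<Light>& AllLights, std::mt19937& Rng)
{
	constexpr int RayBudget = 4; // same cost for 1 light or 1000 lights
	if (AllLights.empty()) { return 0.0f; }

	std::uniform_int_distribution<size_t> Pick(0, AllLights.size() - 1);
	float Radiance = 0.0f;
	for (int i = 0; i < RayBudget; ++i)
	{
		// Real implementations importance-sample lights; uniform keeps it simple.
		const Light& L = AllLights[Pick(Rng)];
		if (TraceShadowRay(L))
		{
			// Monte Carlo weight for uniform light selection (probability 1/N).
			Radiance += L.Intensity * static_cast<float>(AllLights.size()) / RayBudget;
		}
	}
	return Radiance;
}
```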
This information should be in tooltips (I just changed them a bit to make a stronger message).