Additionally: I was talking to a colleague about MegaLights scalability. Is there a particular reason software RT quality is significantly worse than HWRT?
It's just a one-off made for the demo in some Blueprints.
We didn’t invest much time in it. Some improvements are very simple - increasing the screen-space ray length, setting up proper biases, etc. Some would be harder - tracing mesh SDFs instead of the global SDF.
The question, though, is whether it makes sense to invest in this path. Given that it will take at least a year or two for games to start shipping with it, HWRT will likely have become a requirement for games by that point. It really comes down to deprecating the 10x0 series. Those were some amazing GPUs, but maybe a decade later we will be able to skip them :).
I see your point, but some studios may be close to release and could update their products to use MegaLights without taking a lot of extra time. I mean, some games could already be using it within a few months.
Maybe an intermediate option could be the sweet spot: something like applying the 80/20 rule here.
PS: I haven’t even tried Software MegaLights yet. Just an idea.
a smart studio does not change the rendering method while in late production. you’re wrong there. it adds a lot of extra time to apply changes and fixes across a whole game, or it’s gonna release in a beta state. an indie dev with short dev times may have a chance to update, but good advice is to stick to how it looked at the original concept stage.
gotta use the engine at stage zero. fresh start. and only alpha stage graphics development. hmmhmm…
Fair enough. I think better visualization modes would be really helpful for MegaLights usage going forward, although if light complexity works with MegaLights now, that would be perfect. Just being able to visualize expense and light density as a field rather than per ray would be a good additional option.
I don’t disagree; it doesn’t even seem like SWRT is providing a meaningful performance win in a lot of cases anymore (at least from what I’ve heard anecdotally). Is there a point in the near future (e.g. 5.7) where SWRT will just be fully deprecated?
Why would you want that? I think it was just a “patch” to make RT shadows work without black patches (though not fully). But MegaLights doesn’t seem to need it.
What that CVar does, as far as I understand it, is stream full-fat Nanite geometry into the RT BLAS, as opposed to the simplified proxy geometry Nanite usually uses. The reason it made RT shadows work is that RT shadows had discontinuity problems between the GBuffer mesh and the proxy mesh; MegaLights bypasses that entirely via its screen-space traces.
That said, it’s still not perfect. In areas missing screen information, the proxy mesh still shows through, and on certain models especially, the default mesh can look terrible compared to the Nanite mesh. This is why I’m curious whether the Nanite RT streaming capability looks solvable or not. If it isn’t, then a better proxy-generation algorithm might be useful.
Of course, if you’re hand-authoring the proxy meshes, the whole thing is a moot point anyway.
Yeah, I can confirm: I just tested SWRT MegaLights with my 1080 and it doesn’t work well, if at all, in my actual project. It costs almost 3 ms total, which is weird because when I tested it in the template it kind of worked. Maybe it’s just my project, idk. Tried a lot of console commands but no luck.
Edit: It seems that combining Lumen and a lower screen resolution with TAA produces a decent amount of performance gain while retaining some level of visual clarity. I’m not sure whether it would be more performant without Lumen; I’ll try tomorrow. But I’m curious to know if there are any console commands that could also improve performance.
Question about SWRT and HWRT: Will my game automatically detect which type the user's hardware supports (if HWRT support is enabled in the engine)? Or does the user have to enable HWRT to get the benefits of it?
Is anyone aware of any CVars that could help improve performance for SWRT+VSM MegaLights? I got it working decently by lowering the resolution and using TAA/upscaling, but I definitely can’t have as many lights in an area as I probably could with an RTX card.
Anyway, I did more testing with SWRT+VSM MegaLights at an upscaled resolution (on spotlights) and came to the conclusion that if you don’t use a lot of overlapping lights, regular lighting is slightly more performant, but if you have a lot of overlaps, then MegaLights is basically required.
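For reference, the kind of setup described above (lower internal resolution, TAA, coarser VSMs) can be sketched as stock console variables. These are a starting point I'd experiment with, not a recommendation - the values are arbitrary, and you'd want to profile with `stat gpu` in your own scene:

```
; Paste into the console one per line, or into DefaultEngine.ini under [SystemSettings]
r.ScreenPercentage 75                        ; render at a lower internal resolution
r.AntiAliasingMethod 2                       ; TAA, to help hide the upscale
r.Shadow.Virtual.ResolutionLodBiasLocal 1.0  ; coarser VSM pages for local lights
```

The VSM LOD bias trades shadow sharpness for fewer physical pages to render, which is usually the relevant cost when many local lights overlap.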
Yeah, we should fix lighting complexity. Added it to the todo list.
We don’t have any plans to deprecate SWRT. Mesh SDF tracing can indeed be slower, but Global SDF is usually faster even on new NV GPUs. Global SDF merges the entire scene into a single volume, which makes it low quality but constant in performance. HWRT performance depends heavily on scene optimizations - things like instance overlap or huge instances with lots of empty space are very costly.
Unlikely, but we are working with vendors on a proper Nanite Ray Tracing solution.
It’s also something we have on our todo list.
Yes.
It’s an experimental feature inside an experimental feature, and at the moment MegaLights only helps with VSM projection and shading costs, while shadow map generation is still done per light as before.
In general I’m conservative with exposing things for an experimental feature, so it depends whether you have a good argument for it.
Nothing concrete to share, but it’s something on the todo list.
Thanks for the answers. Actually, per-light doesn’t matter now that I think about it.
Basically my problem is that I’ll have both SWRT and HWRT players, so how should I detect whether they’re HWRT-capable? I figured out I can change the shadowing method with the default mode and the CVar.
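If it helps, the runtime check I'd reach for is the engine's RHI capability global rather than a user setting. A minimal sketch, assuming a UE5 C++ project; the CVar name here is a placeholder - substitute whichever CVar you found for the default shadowing mode:

```cpp
#include "RHI.h"                  // GRHISupportsRayTracing
#include "HAL/IConsoleManager.h"

// Illustrative only: pick a tracing path based on what the RHI reports.
void ChooseShadowTracingPath()
{
    // True only when the device and RHI actually initialized HW ray tracing.
    const bool bHWRT = GRHISupportsRayTracing;

    // "r.MyGame.UseHardwareRT" is a hypothetical name standing in for the
    // real CVar that switches the shadowing/tracing method.
    if (IConsoleVariable* CVar =
            IConsoleManager::Get().FindConsoleVariable(TEXT("r.MyGame.UseHardwareRT")))
    {
        CVar->Set(bHWRT ? 1 : 0);
    }
}
```

Note this assumes HWRT support is already enabled in the project settings; `GRHISupportsRayTracing` reports what actually initialized on the player's machine, so it's a reasonable branch point for an automatic default.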