So much negativity around the new features, just because they aren't meant for current gen.
Nanite is a godsend: it can save weeks of tedious optimization work while letting you use hundreds of thousands of meshes at essentially no FPS cost. (I tried adding 100k meshes on top of what was already in the scene: zero difference in performance. That was a genuinely mind-blowing moment, seeing a 0 FPS impact.)
Nanite scales to a level that was almost unachievable previously, while saving a lot of time... and now it works with plants too. Can't wait for the day it works with translucent materials. (That would make me very happy.)
Lumen is awesome for what it is, even with its limitations. Just imagine for a moment what we got: (relatively) dynamic GI that runs just fine on last-gen cards, while still achieving 30 FPS on older midrange cards. And it doesn't even care what brand is written on your graphics card... AMD, Intel, NVIDIA, etc. - it just works.
No baking needed; you can turn it on/off at runtime, adjust it at runtime, etc. For decades we wanted something like Lumen, and now we have it.
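To illustrate the runtime toggling: a minimal sketch of the relevant console variables, which you can flip live in the in-game console or persist in DefaultEngine.ini. These names are what I believe UE5 uses (r.DynamicGlobalIlluminationMethod, r.ReflectionMethod); double-check the exact values against your engine version before relying on them.

```ini
[/Script/Engine.RendererSettings]
; 1 = Lumen, 0 = none, 2 = screen-space GI (verify against your UE5 version)
r.DynamicGlobalIlluminationMethod=1
; 1 = Lumen reflections
r.ReflectionMethod=1
```

The same variables can be set per-scene via a Post Process Volume instead, which is the usual way to adjust Lumen quality at runtime without touching global config.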
==========
Yes, yes, there are bugs, issues, and missing parts - but think back for a moment: just a year ago I was poking the Lumen team at Epic about foliage in Lumen, and the fixes barely made it into 5.0, just a day or so before the deadline. (It's either in this thread or the previous one.)
Both of these features (Nanite and Lumen) are still very "young", and they will need a lot more time and effort to support everything properly. (AFAIK, Custom Stencil didn't work with Nanite until 5.2, for example...)
The "performance" side of things is to be expected: Nanite spends a fixed amount of GPU time to save CPU time (in the end, that's the trade - draw-call overhead moves off the CPU), and Lumen has a base cost you won't get rid of, in terms of things that simply need to be calculated. That's just how it works - for everything else we have baked lighting.
The perceived performance issues on the stagnating "low end" (since the 4050 is now sold as the 4060...) will disappear over time as new hardware arrives - which is probably why Epic isn't investing too much effort into "scraping the barrel for fractions of a millisecond".
The performance delta between low-end and high-end cards is just bonkers nowadays... somewhere between 3x and 4x, depending on what you run, and the 4090 isn't even the fastest card NVIDIA could have made; there's a larger chip that could have become the 4090 Ti if AMD had decided to also push its cards beyond the current power draw. (Go look, the coolers exist... ridiculous thing, uses actual copper plates to reroute the power input: https://cdn.videocardz.com/1/2023/07/TITAN-ADA-RTX-4090TI-COOLER-4-1.jpg )
What we currently see with the "bad performance" of Lumen is completely normal and has always happened when technology was pushed. Remember Crysis?
The sole reason we could run "modern" games on something like a 3060 at 4K was that we didn't have "next gen" games at that point. Whenever a console generation changes, system requirements jump a few years later - which is what we're seeing right now. (Remember how everyone said "8 GB of VRAM is enough" just a year ago? Yeah, suddenly it isn't anymore, especially not if you try to run RT - and that's biting NVIDIA right now. Meanwhile I'm sitting here laughing, with my 16 GB 6900 XT having no issues whatsoever.)
==========
Wall of Text ends here
I also recommend not making games (right now) that require Lumen to be enabled for gameplay reasons; it just won't work out. (Been there, considered that, calculated it, shelved it.)