The issue I have with your posts is that you repeatedly dismiss the results of those who don’t experience significant performance issues as irrelevant, arguing that their tests simply don’t cover the problem areas and that’s why their performance appears “good.” This implies that you are aware of scenarios where Nanite performs well, yet you only highlight the ones in which it causes performance drops. However, when it comes to your own example, where you show performance gains by turning Nanite off, you don’t explain how one might optimize for Nanite specifically – not for LODs, but for Nanite itself. Even though others have shown you that several factors are at play, you simply put all the blame on Nanite. That is, of course, an easy position to take.
I understand the argument that it’s often easier to criticize developers. But, honestly, if Nanite were truly that poor in performance, developers would have noticed it during optimization – well before release. Bloober Team, the developers behind the Silent Hill 2 remake, are experienced and have shipped several titles in the past. It’s hard to believe they would release the game without knowing about these issues; more likely, they knowingly chose Nanite despite some performance cost. I don’t know what kind of work you do, but in my job I regularly see tasks that simply need to be completed, even if every detail isn’t perfect. As long as something is “good enough,” that’s what matters.
Another issue is how you present performance gains. You mention a 40% improvement, but don’t specify whether that’s in FPS or render time (ms). Here’s a simple example: at 10 FPS a frame takes 100 ms; adding 4 more frames is a 40% increase in FPS, but at 14 FPS a frame takes about 71.4 ms, so the reduction in render time is only about 28.6%. This difference matters because FPS can be misleading, since the actual gain in ms is smaller. Increasing FPS from 60 to 120 is a 100% increase, but the reduction in render time is only 50% (from about 16.7 ms down to 8.3 ms). These details are essential for clarity, especially when speaking from a developer’s perspective rather than a consumer’s.
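To make that relationship explicit, here is a minimal Python sketch (the function names are just for illustration, not from any benchmark tool) that converts an FPS change into the corresponding frame-time reduction:

```python
def fps_to_ms(fps: float) -> float:
    """Convert frames per second to frame time in milliseconds."""
    return 1000.0 / fps

def gains(fps_before: float, fps_after: float) -> tuple[float, float]:
    """Return (FPS increase in %, frame-time reduction in %) for the same change."""
    fps_gain = (fps_after - fps_before) / fps_before * 100.0
    ms_before, ms_after = fps_to_ms(fps_before), fps_to_ms(fps_after)
    ms_reduction = (ms_before - ms_after) / ms_before * 100.0
    return fps_gain, ms_reduction

# 10 -> 14 FPS: +40% FPS, but only ~28.6% less render time per frame
print(gains(10, 14))    # (40.0, 28.57...)
# 60 -> 120 FPS: +100% FPS, but only 50% less render time per frame
print(gains(60, 120))   # (100.0, 50.0)
```

The point being: the same measurement quoted in FPS always looks like a bigger win than the same measurement quoted in milliseconds, which is why stating the unit matters.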
Hardware also plays a role. Newer hardware shrinks the render-time gap between running with Nanite and without it. Your argument could just as easily apply to ray tracing: although it reduces FPS, it’s increasingly embraced on modern hardware. Many graphics features we appreciate today were once only feasible on more powerful hardware. Even if Nanite currently costs 40% more, that difference will matter less as hardware improves. For instance, with an RTX 3060 the difference between Nanite and LODs might be 40%, but with an RTX 4080 or 5070 it might be only 10% or less.
With Nanite, Epic Games has pioneered future technology, much like NVIDIA did with ray tracing. Hardware will need to catch up, as it often has throughout gaming and graphics technology history. Take Crysis as a classic example – when it launched, few systems could handle its features. Even back in 1998, Unreal didn’t perform as well as other games, but it looked considerably better. So if you really want to show that something is better than Nanite, then please give us a demo so that everyone can see for themselves. That would make this discussion far more productive.