What's acceptable performance for you? Because I'm getting 15 fps on a 3070 at 66% of 1440p, without Lumen, in an empty scene where there's just a landscape with Nanite and tessellation applied. No trees, nothing else. That does not look acceptable to me. It won't be acceptable for future generations either, even if performance demands don't change. But they will. The people who are comfortable playing at 60 fps now will upgrade to 144, and from 144 to 240, etc. And that does not look achievable even in 6-7 generations of GPUs.
Well, you're saying I can play it with Nanite? How exactly do I do that, then? I would love to, but something tells me it's gonna be a slideshow on my 3070.
I was saying, asking, how do we move to the next generation of graphics without a paradigm shift?
RTELOS: I'm guessing I'm in support of what this thread says, in general. If you have a somewhat lower-poly game, or if you want to make a game with "current gen" graphics, then Nanite is probably detrimental for that.
But to me Nanite seems revolutionary, almost magical: the power to push as many polygons as you want at a fixed cost. In my case - be it offline rendering or real-time games - the limit on polygons was ever present. Either it limited how many objects you could have on screen, or it limited the quality, or you had to accept visible transitions from higher-poly meshes to LODs.
Here's how I'd go: by not depending on garbage, blurry TAA and frame smearing.
9th gen deserves a crisp motion presentation and 60 fps gameplay, utilizing the several techniques developed at the end of 8th gen. I'm all for raytracing tech as long as it performs well (and there are examples of that).
Let's not waste performance on GI methods like Lumen that iterate an insane number of times when nothing in the scene even happens or moves. It's temporally unstable without TAA and still produces major artifacts. We've barely scratched the surface with the GI methods made during 8th gen.
Mesh optimization systems like precomputed meshlets for draw-call reduction, better LOD algorithms that account for overdraw (since the one in UE is complete trash), and Forward+ for cheaper and MUCH more stable motion.
Overdraw is not just a performance issue, it's also a visual issue that Nanite doesn't solve. A paradigm shift is exactly what we need and we're not getting it. It's 8th gen all over again, just exaggerated with constant blur & smear abuse.
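If you want to put a number on the Lumen part of that specifically, here is a minimal sketch of the A/B comparison, assuming a UE 5.x project. The helper name is mine; the same thing can be done by typing r.DynamicGlobalIlluminationMethod 0 and r.ReflectionMethod 2 into the console.

```cpp
// Minimal sketch: compare the frame cost with and without Lumen GI/reflections.
// Assumes UE 5.x; the values mirror the Project Settings dropdowns.
#include "HAL/IConsoleManager.h"

static void SetLumenEnabledForTest(bool bUseLumen)
{
    // Dynamic GI method: 0 = None, 1 = Lumen, 2 = Screen Space (SSGI).
    if (IConsoleVariable* GIMethod =
            IConsoleManager::Get().FindConsoleVariable(TEXT("r.DynamicGlobalIlluminationMethod")))
    {
        GIMethod->Set(bUseLumen ? 1 : 0);
    }

    // Reflection method: 0 = None, 1 = Lumen, 2 = Screen Space.
    if (IConsoleVariable* ReflectionMethod =
            IConsoleManager::Get().FindConsoleVariable(TEXT("r.ReflectionMethod")))
    {
        ReflectionMethod->Set(bUseLumen ? 1 : 2);
    }
}
```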
That looks like concept art more than Nanite, but I agree with your point.
@TheKJ Have you tried switching over to Shadow Maps instead of Virtual Shadow maps? I know they suggest using VSM with Nanite, but I just switched back to Shadow Maps and gained 30 fps in the editor.
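For anyone who wants to try that quickly, a minimal sketch of the toggle (the function name is mine; you can also just type r.Shadow.Virtual.Enable 0 into the console or put it in DefaultEngine.ini):

```cpp
// Minimal sketch: toggle Virtual Shadow Maps to A/B their cost against classic
// shadow maps. Equivalent to "r.Shadow.Virtual.Enable 0/1" in the console.
#include "HAL/IConsoleManager.h"

static void UseVirtualShadowMaps(bool bEnable)
{
    if (IConsoleVariable* VSM =
            IConsoleManager::Get().FindConsoleVariable(TEXT("r.Shadow.Virtual.Enable")))
    {
        VSM->Set(bEnable ? 1 : 0); // 0 = classic shadow maps, 1 = VSM
    }
}
```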
With DLSS or FSR there's no need to play games at native resolution. Render at 1080p and you'll get 50+ fps on that hardware in the same scene.
But I also have a feeling your GPU is not your bottleneck, because I can run an entire scene full of Nanite at a steady 50 fps and I've only got a 4070 Ti, so it's not THAT much more powerful than your 3070. Besides, by the time UE5 is finished we'll be on 6070-class GPUs, so it's all moot.
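If it helps, here's a minimal sketch of the "render below native and upscale" setup being suggested, assuming UE 5.x. Note that r.ScreenPercentage applies to game/PIE viewports; the editor viewport has its own screen percentage control. The function name and the 66% value are just examples.

```cpp
// Minimal sketch: render internally at ~66% of output resolution and let TSR
// upscale, equivalent to "r.ScreenPercentage 66" with TSR as the AA method.
#include "HAL/IConsoleManager.h"

static void UseUpscaledRendering()
{
    if (IConsoleVariable* ScreenPercentage =
            IConsoleManager::Get().FindConsoleVariable(TEXT("r.ScreenPercentage")))
    {
        ScreenPercentage->Set(66.0f); // internal render resolution as % of output
    }

    if (IConsoleVariable* AAMethod =
            IConsoleManager::Get().FindConsoleVariable(TEXT("r.AntiAliasingMethod")))
    {
        AAMethod->Set(4); // 0=None, 1=FXAA, 2=TAA, 3=MSAA, 4=TSR
    }
}
```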
Even if today those are concepts only, how do we move to real games like those tomorrow? I wanna play an FPS in a forest like those screens.
I’m not against Nanite at all and I’ve said multiple times that its performance will improve with future engine updates, but also hardware updates will make it all moot eventually too.
I was just pointing out to you that those aren't screenshots.
I’ve said multiple times that its performance will improve with future engine updates
That’s not helping the multitude of games with butchered performance using the current iterations.
but also hardware updates will make it all moot eventually too.
We already had that hardware update, and the potential is being squandered in a manufactured sense. Game developers needed better workflows for real optimization, but Epic doesn't care about that field anymore since they see more money in virtual production work from Disney.
I wanna play a FPS in a forest like those screens
And you don't need Nanite to do so. That's what you're refusing to understand.
You know that you can use a combination of LODs and Nanite, right? It also seems strange for you to make statements like "That's not helping the multitude of games with butchered performance using the current iterations". No one is forcing you/them to use UE5. UE 4.27 exists for a reason. UE5 is a work in progress. If you want the engine to be mostly complete, wait until UE 5.20 before you use it. It'll likely be ready in 2026.
Also, I just converted my entire level to using Nanite, and although it did drop my FPS from 90 down to 45 immediately after I made the switch, I switched back to regular Shadow Maps, waited a few minutes, and the performance picked up. Then I restarted the editor and my FPS was back at 90. This was using UE 5.3.
Like I said, try switching back to Shadow Maps instead of Virtual Shadow Maps. I'll bet this improves your performance.
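On the "combination of LODs and Nanite" point: Nanite is a per-asset flag, so you can leave most meshes on the normal LOD path and only enable it where it pays off. A minimal editor-side sketch, assuming an editor build around UE 5.3 (the helper name and the 5% fallback value are mine):

```cpp
// Minimal editor-side sketch: enable Nanite on a single high-poly asset while
// everything else keeps its regular LOD chain. Field names are as of UE 5.3.
#include "Engine/StaticMesh.h"

static void EnableNaniteOnMesh(UStaticMesh* Mesh)
{
    if (!Mesh)
    {
        return;
    }

    Mesh->NaniteSettings.bEnabled = true;
    // Keep a cheap non-Nanite fallback mesh for paths/platforms without Nanite.
    Mesh->NaniteSettings.FallbackPercentTriangles = 0.05f; // ~5% of source triangles

    Mesh->Build();            // rebuild render data with the new settings
    Mesh->MarkPackageDirty();
}
```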
wow. this aged well.
So, I've converted everything I can into Nanite, because I'm an amateur and indie dev and, as with Lumen, I like it to just take care of all that for me. I can use 1K, 2K, 8K assets: meshes, landscapes, materials, etc.
And Nanite will discard all LODs and auto-scale in the editor for best performance. Play the game baked/standalone and it just works. What more could you ask for? If you did it manually and had an options screen to adjust stuff manually (typical game), people would ask for an auto setting anyway, or some Radeon/NVIDIA dynamic resolution integration. Combine this with FSR and you can literally do it all on the fly if FPS is an issue.
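On the "do it on the fly" part: the engine also has its own dynamic resolution system that chases a frame-time budget, independent of the vendor upscalers. A minimal sketch, assuming UE 5.x and that dynamic resolution is supported on the target platform (the function name and the 16.6 ms budget are just examples; the null checks mean nothing happens if a cvar doesn't exist in your version):

```cpp
// Minimal sketch: turn on engine dynamic resolution and aim for a frame budget,
// so resolution drops before frame rate does.
#include "HAL/IConsoleManager.h"

static void EnableDynamicResolution()
{
    if (IConsoleVariable* Mode =
            IConsoleManager::Get().FindConsoleVariable(TEXT("r.DynamicRes.OperationMode")))
    {
        Mode->Set(2); // 0 = off, 1 = follow game user settings, 2 = force enabled
    }

    if (IConsoleVariable* Budget =
            IConsoleManager::Get().FindConsoleVariable(TEXT("r.DynamicRes.FrameTimeBudget")))
    {
        Budget->Set(16.6f); // target frame time in ms (~60 fps)
    }
}
```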
TSR and TAA are on par, TSR with a slight advantage because it's just a broad method that works on most hardware and produces excellent results. The blur can be reduced, especially motion blur/bloom etc.
But test it in motion, with and without VSync + FreeSync (you need both to catch both ends of the tear). I doubt you can notice the issue, but I'm not saying it's there.
What is there is a roughly 0.2 second blur when you go from moving the camera to not moving it.
Yeah, it's annoying, and you can't unsee it once you see it.
Anyone got a fix? lol. You can mask it with depth of field if you like, lol.
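Not a real fix, but two knobs that influence that settle time are worth trying. A minimal sketch, assuming the smear comes from the temporal history converging after the camera stops; the function name and values are just starting points, and the frame-weight cvar only applies when the AA method is TAA rather than TSR:

```cpp
// Minimal sketch: knobs that affect how quickly the image settles after the
// camera stops moving. Values are starting points, not recommendations.
#include "HAL/IConsoleManager.h"

static void ReduceTemporalSettleBlur()
{
    // Rule out motion blur first; it's a separate effect from TAA/TSR history.
    if (IConsoleVariable* MotionBlur =
            IConsoleManager::Get().FindConsoleVariable(TEXT("r.MotionBlurQuality")))
    {
        MotionBlur->Set(0);
    }

    // TAA only: weight the current frame more so the history converges faster
    // (default ~0.04; higher trades residual blur for more shimmer/aliasing).
    if (IConsoleVariable* FrameWeight =
            IConsoleManager::Get().FindConsoleVariable(TEXT("r.TemporalAACurrentFrameWeight")))
    {
        FrameWeight->Set(0.2f);
    }
}
```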
Hello everyone,
I hope I’m not going to get into trouble for this, but I think it’s important to address the following. This discussion has been ongoing for almost a year now and hasn’t made significant progress. When I first read TheKJ’s original post, I had the impression that he might not fully understand the topic. I tried to remain objective and review the arguments, even though I am only an amateur with Unreal Engine and lack professional experience.
After reading through the responses in this forum, it became clear to me that other members share similar concerns. TheKJ has admitted himself that he is not a developer but just a user. This means he sees the end product but might not fully grasp the technical details behind it. This is evident in his testing approach, which does not reflect realistic scenarios and thus has limited practical value.
Many members have tried to explain to TheKJ that Nanite serves a different purpose than he assumes, or that Epic Games is not misleading its customers by supporting TAA or DLSS. Despite numerous explanations and attempts to clarify, TheKJ seems to stick to his initial, and sometimes incorrect, claims. This has led to repeated discussions that do not seem to be advancing.
I understand that this is a public forum and allows for diverse opinions. However, after a year of the same discussion without significant willingness to learn or improve, I wonder if it might be time to reconsider continuing this thread and possibly take measures to encourage more constructive dialogue within the forum.
I look forward to hearing your thoughts on this and hope for a productive resolution.
In fairness… I'm having trouble getting a scene with nothing but a volumetric sky in it above 90 fps at 1920x1080 upscaled to 4K, on an RTX 3090… which means my actual scene, even with most settings at medium or low, can't even break 50 fps…
This happens even when I disable Virtual Shadow Maps… in fact it's worse without them.
My draw calls are extremely low…
There's not much more optimising I can do… my lighting needs to be dynamic. UE5 is a real struggle if you're trying to release a game to the general public, who don't use supercomputers.
Part of it is due to runtime virtual textures… which I was told were a performance saver… but in reality they massively kill FPS.
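Before pinning it on any one feature, it's worth letting the GPU profiler name the expensive passes. A minimal sketch of what I'd run in that scene (the function name is mine; all of these can also just be typed into the console, and the two toggles at the end are meant to be flipped one at a time while watching the timings):

```cpp
// Minimal sketch: get per-pass GPU timings, then A/B the heavy features.
// Equivalent to typing these commands into the in-game console.
#include "Engine/Engine.h"
#include "Engine/World.h"

static void ProfileSceneCost(UWorld* World)
{
    if (!GEngine || !World)
    {
        return;
    }

    GEngine->Exec(World, TEXT("stat unit"));  // CPU vs GPU bound at a glance
    GEngine->Exec(World, TEXT("stat gpu"));   // live per-pass GPU timings
    GEngine->Exec(World, TEXT("ProfileGPU")); // one-shot detailed GPU capture

    // Toggles to A/B (flip one at a time in practice):
    GEngine->Exec(World, TEXT("r.Nanite 0"));                // Nanite rendering off
    GEngine->Exec(World, TEXT("r.Shadow.Virtual.Enable 0")); // classic shadow maps
}
```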
You claim to have offered constructive solutions. I reviewed your posts on this. One of your statements is that better LOD algorithms are needed. This is as general as saying that “better algorithms for Nanite” are needed. Better solutions are, of course, always desirable. However, you provide no evidence on how these “better” algorithms would actually improve performance or visual quality. There are no detailed explanations of how these algorithms work, making it difficult for us to understand or evaluate your suggestions technically. Concrete implementation proposals or practical applications are also missing. So, your statements remain vague and unconvincing. It’s not enough to simply assert that algorithm XYZ is better.
Regarding your claim that this forum post is the “most detailed performance test” available, I am highly skeptical. Essentially, all you did was open the editor, place some meshes, and activate Nanite. There are numerous examples online where users have achieved significantly higher FPS, sometimes up to 100% more, by activating Nanite. However, I’m unsure if this truly reflects Nanite’s actual performance. These test scenarios are far removed from real-world games, which are fully compiled and not run directly from the editor. As far as I know, measuring performance in the editor is intended to identify and optimize performance issues, not to obtain precise performance data for real applications. The actual performance in the editor will never fully correspond to real-world performance.
Moreover, your tone seems more like that of a disgruntled customer rather than a professional developer. If you truly want to effect change, it would be advisable to work on the way you present yourself so that your arguments can be taken seriously.
Lastly, I want to address the discussion around TAA. I have my own reasons for avoiding TAA, which is, of course, fine. However, I don’t understand why this discussion is part of this post. The title of the post clearly suggests that it is about Nanite, LODs, and their performance efficiency. So why are other topics being discussed that do not relate to the title? It seems to me that this post exists only for a frustrated user to vent their anger at Epic.
Is all of this directed at the original poster? Unclear.
Anyway, I have a TON of experience comparing Nanite with LODs in gameplay situations.
Nanite is NOT good for meshes that have lower triangle counts than about 100,000.
HOWEVER, Nanite is ALWAYS going to be better than not having LODs, or having poorly optimized LODs.
Nanite is MUCH better than LODs for complex moving objects (like hi-res vehicles).
Nanite is best used in OPEN WORLDS. If you’re in a small world, your camera will never be far enough to trigger Nanite to adjust mesh complexity.
LODs are the absolute best at lower-complexity meshes (<100,000), and for cramped or smaller areas.
LODs are generally great for most situations. The main draw of Nanite is that it can SAVE TIME. But, at the moment, in my opinion, most hardware is not strong enough to handle large amounts of meshes that Nanite would simplify.
In a few years, yes, but now, no. So, unless an object has more than 100,000 triangles, I would avoid Nanite… assuming your world is quite dense. If it isn’t dense, then don’t waste time with LODs. If you’re using simple models, do NOT use Nanite, and utilize Unreal’s default LOD setup.
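If you want to apply that ~100,000-triangle rule of thumb mechanically, here's a minimal editor-only sketch. The threshold and the whole approach are this thread's heuristic, not an Epic recommendation, and the function name is mine:

```cpp
// Minimal editor-only sketch: enable Nanite only on meshes above a triangle
// threshold; everything below it stays on the regular LOD path.
#include "Engine/StaticMesh.h"

static void ApplyNaniteThreshold(const TArray<UStaticMesh*>& Meshes, int32 TriangleThreshold = 100000)
{
    for (UStaticMesh* Mesh : Meshes)
    {
        if (!Mesh || !Mesh->GetRenderData() || Mesh->GetRenderData()->LODResources.Num() == 0)
        {
            continue;
        }

        const int32 NumTriangles = Mesh->GetRenderData()->LODResources[0].GetNumTriangles();
        const bool bWantNanite = NumTriangles > TriangleThreshold;

        if (Mesh->NaniteSettings.bEnabled != bWantNanite)
        {
            Mesh->NaniteSettings.bEnabled = bWantNanite;
            Mesh->Build();            // rebuild render data with the new setting
            Mesh->MarkPackageDirty();
        }
    }
}
```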
My last post was directed at the author of this thread, but the points I mentioned apply to all of us, including myself. Especially the comments about constructive contributions. I believe it’s in everyone’s best interest if posts provide answers to questions, encourage innovation and creativity, or simply address known issues and bugs. This is exactly what’s missing from this thread. I understand that there have been some reasonable and constructive responses here, but the title speaks for itself.
I fully agree with your statements regarding LODs and Nanite; they align with my tests and understanding of these technologies. Much of it also matches what Epic Games has stated, even if their marketing heavily emphasizes Nanite. I think both Nanite and LODs have their place. Neither technology is inherently bad; it all depends on when and how they are used. Honestly, I believe this could be a good final word to conclude this thread, whose existence I find quite questionable. I doubt that anyone searching for an answer will find it here.
Got it.
Normally I try to catch up on a thread, but there are so many posts, I was hoping you’d clear it up for me.
You did.
Since everyone needs a reminder of what we achieved years ago but Unreal refused to catch up on:
- Ordered dithering for LOD transitions, since this algorithm doesn't animate in motion. It would still be compatible with the crappy TAA everyone loves (see the sketch after this list).
- A better algorithm (or workflow) would be something comparable to the industry standard, Simplygon: a quick and efficient system that can optimize micro detail into a depth texture for effects like screen-space shadows, then a step that reduces overdraw by converging or removing thin details that will decimate due to linear sampling fall-off. Another aspect Unreal's LOD algorithms miss is the ability to enforce performance-friendly topology in the triangle arrangement.
- Draw calls? Precomputed (LOD swapping, culling) meshlets made from local static (optimized) objects.
- Deferred texturing for foliage, since Nanite hates WPO.
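On the ordered-dithering bullet above, here's a minimal CPU-side sketch of the idea for anyone who hasn't seen it. In practice the test lives in the material/shader and keys off screen position; because the Bayer threshold depends only on pixel position, not time or randomness, the dissolve pattern stays put in motion instead of crawling:

```cpp
// Minimal sketch of an ordered-dither LOD cross-fade test. The 4x4 Bayer matrix
// gives each pixel a fixed threshold from its screen position alone, so the
// transition pattern does not animate frame to frame.
#include <cstdint>

// 4x4 Bayer matrix, values 0..15.
static const uint8_t Bayer4x4[4][4] = {
    {  0,  8,  2, 10 },
    { 12,  4, 14,  6 },
    {  3, 11,  1,  9 },
    { 15,  7, 13,  5 },
};

// True if this pixel should show the *new* LOD at the given fade amount,
// where Fade goes from 0 (old LOD fully visible) to 1 (new LOD fully visible).
bool ShowNewLod(uint32_t PixelX, uint32_t PixelY, float Fade)
{
    const float Threshold = (Bayer4x4[PixelY & 3][PixelX & 3] + 0.5f) / 16.0f;
    return Fade > Threshold;
}
```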
Nanite is NOT good for meshes that have lower triangle counts than about 100,000.
Nanite is MUCH better than LODs for complex moving objects (like hi-res vehicles).
Nanite is best used in OPEN WORLDS. If you’re in a small world, your camera will never be far enough to trigger Nanite to adjust mesh complexity.
LODs are the absolute best at lower-complexity meshes (<100,000), and for cramped or smaller areas.
There are numerous examples online where users have achieved significantly higher FPS, sometimes up to 100% more, by activating Nanite. However, I’m unsure if this truly reflects Nanite’s actual performance
0% understanding of overdraw. Your test included pre-existing problems for non-Nanite meshes that are not present in OPTIMIZED games. Here's a quick clip explaining overdraw.
Performance is NOT affected by poly count; it's the surface area of the overdraw visible on screen.
What I mean by surface area is explained in this video chapter. But in that scenario it's (surface area on frame) x (material cost), whereas a more specific determination of performance for objects is (surface area on frame) x (material cost) x (related overdraw).
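To make that formula concrete, a toy example with made-up numbers (real GPU cost also depends on quad occupancy, bandwidth, state changes and so on, so this only illustrates the relationship, it doesn't measure anything):

```cpp
// Toy illustration of the cost relation above: same poly count, same material,
// but tripling overdraw triples the shading work. Numbers are made up.
#include <cstdio>

int main()
{
    const double VisiblePixels  = 1920.0 * 1080.0 * 0.40; // object covers ~40% of the frame
    const double MaterialCost   = 1.0;                    // relative shading cost per pixel
    const double OverdrawFactor = 3.0;                    // each covered pixel shaded ~3x

    const double NoOverdraw   = VisiblePixels * MaterialCost;
    const double WithOverdraw = VisiblePixels * MaterialCost * OverdrawFactor;

    std::printf("relative cost: %.0f vs %.0f (%.1fx)\n",
                NoOverdraw, WithOverdraw, WithOverdraw / NoOverdraw);
    return 0;
}
```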
This "poly threshold of what's good for Nanite" is completely ridiculous. That's why I exported a 6 million poly mesh (an optimized game scene with little overdraw) into Unreal and then enabled Nanite. NANITE was 30% slower in a SIMPLE scene. But in a scenario where we run free with billions of triangles in Nanite, we are met with several visual problems that only blur (bye bye relevant gameplay detail, plus detail costing 30%+ more perf) or expensive temporal crap can mitigate.
Epic has enough experience with graphics to know this. They just don't CARE about crappy TAA. They brag about it and will continue to abuse it.
It is only recently that I have seen some possible improvement with Nanite-shadow map compatibility.