They’re not talking about Unreal having snow deformation. They had that in Unreal 3 even. What they’re saying is that NANITE landscape geometry now supports deformation which is a very different beast.
Developers have access to the source code for the engine, so if they’re concerned, they can choose to modify it themselves. Again, any developer choosing to ship a game with an incomplete engine is doing so at their own risk, and they are fully aware of said risks.
You don’t need a hardware profiler to check your stats. Just use the tools that come with the engine, like you’re already doing in PIE. When you package the game, choose Development as the configuration instead of Shipping. Also keep in mind that PIE and Standalone run inside the editor, which does affect performance: packaged games run way faster than what the in-editor stats suggest.
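For anyone following along, these are a few of the standard in-engine counters (typed into the console in PIE or in a Development/DebugGame packaged build); this is a non-exhaustive list:

```
stat unit      (frame / game / draw / GPU thread timings at a glance)
stat gpu       (per-pass GPU cost breakdown)
profilegpu     (one-shot detailed GPU profile of the current frame)
```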
We don’t need to prove you wrong. You can choose not to believe us if you want. That’s your prerogative.
Why do you keep posting similar stats saying that Nanite is slower than LODs? We got the point already. You don’t have to keep trying to prove that you’re right. lol
Think of it as a compilation to a bigger project I’m working on.
They had that in Unreal 3 even. What they’re saying is that NANITE landscape geometry now supports deformation which is a very different beast.
I’m saying they got a basic feature working on something that is broken imo.
Developers have access to the source code for the engine, so if they’re concerned, they can choose to modify it themselves.
This is something I have addressed before: if a studio decides to use Unreal, it usually means they lack the funding and/or motivation to build a proprietary engine. Studios are flocking to this engine for that reason, so the technology shouldn’t be shipped if it’s not ready. Studios will very rarely modify the engine. This assumption is one of the biggest issues with Epic Games and their stance on poor performance across third-party uses. I’m not saying it’s all Epic, but it’s a good 50% their fault when it comes to UE5 releases.
Anyways, I also try to limit my testing in PIE. It’s just that with Nanite, off/on comparisons are a lot more complicated. Also, I can’t profile released games, so…
Yeah, like I said, my main concern is with other studios, and Epic doesn’t understand how impactful their choices are on consumers.
Weird experience: I opened this project with a fresh install, took out all lights except a directional and a point light, and set everything to Low except shadows, which were on High.
I “enable” Nanite on all the static meshes and end up with 3.6ms
I say “enable” because, look at the top: I have the Nanite overview visualization on and nothing is showing.
That happens when the visibility buffer and other Nanite systems aren’t running.
You do know there’s a difference between broken and incomplete right? UE5 is incomplete. The engine isn’t finished. Epic themselves always suggests not to ship a game with Beta functionality. The entire engine itself is in Beta right now. Wait for full release and then see what the performance is like.
Besides, it’s not broken. In previous versions of the engine you couldn’t render more than 2 million polys on screen. Nanite fixes this limitation by ensuring that it’s only ever rendering 2 million at once even if there’s actually billions or trillions of polys on screen. Just because it’s not working the way you thought it would, doesn’t mean it’s broken. You want proof of this? Load up UE4.27, and import ten 100-million poly meshes and compare that performance to the same scene in UE5 using nanite.
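The fixed-budget idea described above can be sketched in toy form. To be clear, this is not Epic’s actual algorithm (real Nanite refines a hierarchical cluster DAG per view, per cluster rather than per mesh); it’s just a greedy illustration of spending a fixed triangle budget on whatever is largest on screen, so the total rendered never exceeds the budget no matter how dense the source meshes are:

```python
# Toy sketch of budgeted LOD refinement (assumption: greedy per-mesh
# refinement; real Nanite works on a hierarchical cluster DAG).
import heapq

def select_lods(meshes, budget_tris):
    """Pick one LOD per mesh so total triangles stay under budget_tris.

    meshes: list of (screen_size, lods), where lods is a list of triangle
    counts ordered coarse -> fine. Returns (chosen LOD indices, total tris).
    """
    chosen = [0] * len(meshes)                 # start at the coarsest LOD
    total = sum(lods[0] for _, lods in meshes)
    # Refine the biggest-on-screen meshes first, while the budget allows.
    heap = [(-screen, i) for i, (screen, _) in enumerate(meshes)]
    heapq.heapify(heap)
    while heap:
        _, i = heapq.heappop(heap)
        screen, lods = meshes[i]
        if chosen[i] + 1 < len(lods):
            extra = lods[chosen[i] + 1] - lods[chosen[i]]
            if total + extra <= budget_tris:   # refine only if it fits
                total += extra
                chosen[i] += 1
                # Lower priority each time a mesh gets refined.
                heapq.heappush(heap, (-screen / (chosen[i] + 1), i))
    return chosen, total

meshes = [(1.0, [100, 1000, 10000]),   # big on screen
          (0.3, [100, 1000, 10000]),
          (0.05, [100, 1000, 10000])]  # tiny on screen
chosen, total = select_lods(meshes, budget_tris=12000)
print(chosen, total)  # [2, 1, 1] 12000 -- budget is never exceeded
```

The point the post makes is exactly this cap: the source assets can total billions of triangles, but the selection step keeps the rendered count bounded.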
Comparing something to the worst scenario will always make another option appear better.
The same thing is happening with DLSS vs. “native”: DLSS is only better than native when native means TAA, which is horribly implemented in every game. This is another thing I’m releasing detailed documentation on, away from this site.
In previous versions of the engine you couldn’t render more than 2 million polys on screen.
I can render 12 million on screen. I just recently loaded a 6-million-tri scene without Nanite, and the performance got worse with Nanite. Again, research overdraw. Another problem with Nanite is that once you reach the poly counts where it pays off, you are required to use two other extremely expensive things: VSMs and TSR. If not TSR, then the 10x-cheaper TAA, which is going to make all that detail pointless in motion. And another problem with Nanite is that it does nothing for subpixel detail versus mipmaps and LODs. It only takes moving a few meters away from an “appropriately dense” Nanite mesh for it to break down into insane noise that only intense frame smearing can fix. This can ofc also happen with really dense non-Nanite meshes.
There is no logical reason to use Nanite other than funding or virtual production/green screen, etc. The only time I think Nanite would make sense is on thin meshes.
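For readers unfamiliar with the overdraw point being argued in this thread, here is a toy cost model (the numbers are invented, and cost is assumed to be simply proportional to shaded fragments, ignoring early-z and quad efficiency) showing why raw triangle count alone doesn’t predict raster cost:

```python
# Toy overdraw model: cost ~ shaded fragments, not triangle count.
def overdraw_ratio(layers, width, height):
    """layers: coverage fractions (0..1) of geometry drawn back-to-front
    with no early-z rejection. Returns shaded-fragments / visible-pixels."""
    pixels = width * height
    shaded = sum(int(frac * pixels) for frac in layers)
    return shaded / pixels

# A scene with little overlap shades fewer fragments than one full of
# overlapping layers, regardless of which has more triangles:
print(overdraw_ratio([1.0, 0.8, 0.8, 0.8], 1920, 1080))  # heavy overlap
print(overdraw_ratio([1.0, 0.1], 1920, 1080))            # light overlap
```

The first scene shades about 3.4 fragments per visible pixel, the second about 1.1, which is the kind of difference the post attributes to overdraw rather than polycount.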
You do know there’s a difference between broken and incomplete right?
My definition of broken means UNFIT for game development. This is what I’m proving on this exact thread, for any developer to review.
If you can’t hit 60fps at 1080p with Nanite enabled, you’re doing something wrong, buddy. I’ve got a fairly dense forest scene and it’s just shy of 60fps running on a 5700xt (this is a GPU with an architecture a generation older than current gen consoles) with a 1080p render target in the least performant area. If I disable Lumen it jumps up to ~90fps. Lumen is the performance killer in UE 5, not Nanite.
This is quite possibly the dumbest possible way to try to prove your point. You’re taking the final submitted frame data, with no knowledge of what the original assets are or what kind of rendering optimizations this game’s engine is doing to that data before submitting it to the GPU, and then sticking it in Unreal (also, as far as I can tell, you’re not putting the camera in the same place relative to the scene as wherever it was when you captured it. Also everything appears to be one material, which is obviously not how it rendered in the original. Also also, your reconstruction is clearly non-manifold, implying some other advanced stuff going on under the hood in that engine).
If I dumped a Nanite frame and reconstructed it and put it back into Unreal again, guess what? It will render faster! Y’know why? Because the raster optimization that Nanite does dynamically is now precalculated and static. Rendering the whole test meaningless.
If I disable Lumen it jumps up to ~90fps. Lumen is the performance killer in UE 5, not Nanite.
Go tell Remnant II players that. The difference between Lumen and Nanite is that Lumen has massive visual value over UE’s poor lighting alternatives, while even an “appropriately dense” Nanite mesh breaks down into insane noise that only intense frame smearing can fix. At least we can turn off Lumen. I have already stated that Nanite requires multiple pieces of expensive software to go along with it.
As for your scene, I have zero context about whether your scene is small or how it looks, or about your shadow method and AA method. I have stated I can get 60fps in City Sample at native 1080p, and in FN with Lumen & Nanite; that doesn’t mean it looks good or that the performance justifies the visuals.
This is quite possibly the dumbest possible way to try to prove your point. You’re taking the final submitted frame data, with no knowledge of what the original assets are or what kind of rendering optimizations this game’s engine are doing to that data before submitting it to the GPU
Okay, you obviously have zero idea how this was done, nor do you get my point about polycount vs. overdraw, which is why you decided to mention materials. I know everything about how the frame was generated: buffer by buffer, microsecond by microsecond, DX11 call to DX11 call. I synchronized the Unreal camera position with the one in the game, hence the low amount of overdraw in the attached post.
If I dumped a Nanite frame and reconstructed it and put it back into Unreal again, guess what? It will render faster!
No, if you try to hardware/API dump a frame from a Nanite scene, it freezes your computer for several minutes, crashes, and creates a corrupted 11GB DX12 profile file, for a single frame. For reference, a regular frame was captured in under a minute with a 2GB file. I chose this game since it’s much more realistic than most 9th-gen titles.
Epic’s decision to invest in Nanite over better LOD algorithms and transition effects directly causes negative effects in the game industry. I’m not doing anything wrong when I boot up a UE5 game and every last one performs at the same garbage framerate. Epic’s “you should customize our engine” defense is poor, since avoiding that work is exactly why people and major AAA studios use Unreal.
Here is the ms timing on an even bigger scale test.
The mesh is 4,300–5,000 tris (same mesh as before), with 10,000+ instances.
With LODs made in 10 minutes:
Look at the ms render time with LODS.
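For context on the kind of LOD setup being tested above, discrete LOD selection by screen size can be sketched like this (the threshold values here are invented; in-engine they come from each LOD’s Screen Size setting on the Static Mesh):

```python
# Minimal sketch of screen-size-based LOD selection (thresholds invented,
# analogous to per-LOD Screen Size settings in UE).
def pick_lod(screen_size, thresholds):
    """thresholds[i] is the minimum screen size at which LOD i is used,
    ordered from finest (LOD0) to coarsest. Returns the LOD index."""
    for lod, min_size in enumerate(thresholds):
        if screen_size >= min_size:
            return lod
    return len(thresholds)  # smaller than every cutoff: coarsest LOD

thresholds = [0.5, 0.25, 0.1]        # cutoffs for LOD0, LOD1, LOD2
print(pick_lod(0.8, thresholds))     # 0: close to camera, full detail
print(pick_lod(0.3, thresholds))     # 1: mid-range
print(pick_lod(0.05, thresholds))    # 3: far away, coarsest mesh
```

Unlike Nanite’s continuous refinement, the choice here is a handful of pre-built meshes and a constant-time lookup per instance, which is where the cheap ms timings in tests like this come from.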
Honestly, I think @TheKJ just wants to vent. I’m not going to be replying after this post and I’d suggest you all do the same. If they’re just arguing with anyone who counters their reasoning, then there’s no point. They’re not here to seek help, so let them keep shouting into the void. Eventually, once UE5 is completed they’ll stop complaining. It’s like people arguing with Ubisoft when they said that 30fps is actually fine for single player games. They’ve only been doing this for 38 years and develop their own engines, but gamers want to be right no matter what sometimes. If that’s the case, you can’t help them.
You took a RenderDoc/PIX capture, saved out the CSVs for the vertex data from each draw call, and used a script to rebuild it in Blender or some other DCC. It’s not difficult. People have been using this process to rip assets from games for years.
My point is that the point of the render pipeline where it’s making draw calls comes AFTER a bunch of other stuff that engines do (like culling and LOD selection). You’re cutting out a significant percentage of the draw thread overhead when Nanite is disabled because it’s already been calculated for you.
And you very obviously have NOT aligned the camera properly because half the geometry in the scene is missing, many of the normals are inverted, and the more detailed geo is on the sides of the buildings facing AWAY from the camera. You’ve either misaligned the camera or done an absolutely terrible job reconstructing the scene, either of which render your test invalid.
And I (and many others) HAVE done RenderDoc dumps of Nanite frames. If it’s freezing your PC for several minutes and crashing, then I can only assume that, once again, you’re doing something wrong.
It’s not about polycount. I just proved that (and again several times before).
I WOULD LOVE to see a scene where Nanite makes sense, but you gave nothing to us or to other readers who might come across this thread.
Nanite is the biggest garbage I ever had to try to implement. Just enabling it clean cuts 30% off performance (also seen in the profiler; for WHAT?! There are 0 meshes with Nanite enabled!).
Furthermore, after converting all meshes, I lost another 20 or so percent, ending up with pretty much exactly half the performance I had before, when I used auto-generated LODs with custom screen-size settings.
I’m more or less convinced that this is not an engine for games anymore … maybe I’ll go back to Ue4 if I want to make a game.
I’m more or less convinced that this is not an engine for games anymore
Just don’t use UE5-specific technologies. I’ve seen a lot of porting tests from UE4 to UE5 (meaning no other changes to settings, just the engine version; no Lumen enabled or Nanite conversion), and they showed a performance boost with UE5.
It’s just that Nanite (and its VSMs), TSR, and Lumen (if your game isn’t 100% static) are incredibly wasteful for performance.
Mixed feelings on Lumen: it would be fine if it had more caching and baking, but all the other lighting solutions in UE are terrible if you have basic interactions and realism in mind.
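For anyone wanting to try that kind of A/B port themselves, a sketch of the console variables involved (cvar names are from recent UE5 releases and worth double-checking against your engine version; they can go in DefaultEngine.ini or be toggled at runtime):

```ini
[SystemSettings]
r.Nanite=0                          ; runtime Nanite toggle (Nanite meshes render their fallback)
r.Shadow.Virtual.Enable=0           ; disable Virtual Shadow Maps
r.DynamicGlobalIlluminationMethod=0 ; 0 = none (Lumen GI off)
r.ReflectionMethod=2                ; screen-space reflections instead of Lumen
r.AntiAliasingMethod=2              ; TAA instead of TSR
```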
Not blurry but they are low res, since these pics were intended to be an example of the kind of games and scenes Nanite could provide.
How many tris can a scene like that show on screen? I doubt we can have the kind of revolution in graphics that Crysis offered way back in 2007 without heavy innovations like Lumen and Nanite.
Back in the day, Crysis/CryEngine offered real-time AO for the first time (even though it was screen-space), fake SSS for the leaves and vegetation, Tessendorf-style ocean waves, and many other things.
I doubt we can have the kind of revolution in graphics that Crysis offered way back in 2007, without heavy innovations like Lumen and Nanite.
I just said: deferred texturing, refined DFAO, SVOGI. Nanite does not help visuals or performance with foliage unless the foliage was completely screwed up in terms of real optimization. If those were reference photos, they could be matched much better without Nanite, with good performance and dynamic lights.
What’s acceptable performance for you? Because I’m getting 15 fps on a 3070 at 66% of 1440p, without Lumen, in an empty scene where there is just a landscape with Nanite and tessellation applied. No trees, no anything. That does not look acceptable to me, and it won’t be acceptable for future generations even if performance demands don’t change. But they will: the people comfortable playing at 60 fps now will upgrade to 144, and from 144 to 240, etc. That does not look achievable even in 6–7 generations of GPUs.
I was saying, asking, how do we move to the next generation of graphics without a paradigm shift ?
RTELOS: I’m guessing I’m in support of what this thread says, in general. If you have a somewhat lower-poly game, or if you want to make a game with “current-gen” graphics, then Nanite is probably detrimental to that.
But to me, Nanite seems revolutionary, almost magical: the power to push as many polygons as you want at a fixed cost. In my case, be it offline rendering or real-time games, the polygon limit was ever-present: either limiting how many objects you have on screen, or limiting the quality, or forcing you to accept visible transitions from higher-poly meshes to LODs.