NEW(3/22/2024) UE5.5+ feedback: Please Invest In Actual PERFORMANCE Innovations Beyond Frame Smearing For Actual GAMES.

First of all, thanks for the Cvars you posted — those are great! I’m going to share my findings in the next few days.

No problem!


You’re forgetting that LODs aren’t just about geometry; they also include optimizations to materials and textures. You can make the argument about virtual textures, sure, but LODs are still important for resolving contention issues with the rasterizer (AKA overdraw).

I should be able to reduce overdraw drastically, but I really don’t have many options outside of r.Nanite.MaxPixelsPerEdge, and that just makes the scene look ugly if you push the value too high.

So I feel material LODs are important, as well as the ability to adjust, per mesh, how aggressively meshes scale down at a distance.

In one of your Nanite scenes, set r.Nanite.MaxPixelsPerEdge to 32 and observe how your geometry appears in the level. If your objects don’t completely disappear, you can see that it looks like a very low-detail LOD. This would be far more useful if we could control it per mesh.
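If you want to experiment with this, the cvar can be typed into the in-editor console at runtime or pinned in a config file. A minimal sketch, assuming you want it project-wide (the value here is purely illustrative, not a recommendation):

```ini
; DefaultEngine.ini -- pin the cvar for the whole project
; (or just type `r.Nanite.MaxPixelsPerEdge 4` into the console)
[SystemSettings]
; Default is 1. Higher values mean coarser Nanite clusters: fewer
; triangles rasterized, but visibly lower geometric detail.
r.Nanite.MaxPixelsPerEdge=4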

That could be automated by Nanite; I don’t see why you’d need to create LODs specifically for that. Besides, material complexity is nothing compared to Lumen and VSM.
I’m helping develop a plugin that swaps every mesh in the scene with a custom one (to see the mipmap level used), and the difference is just a few frames (117 to 120 FPS).

That makes more sense, as it’s more granular control of a mesh rather than having to create an additional model. It would be similar to changing the “WPO Disable Distance”.
Besides, each mesh already has specific values you set when you create the Nanite mesh (like Preserve Area or Explicit Tangents).
Perhaps some of those settings could be changed so that r.Nanite.MaxPixelsPerEdge looks better already? Like Keep Triangle Percentage or Trim Relative Error.
I can see Epic devs adding even more control to that.

1 Like

That could be automated by Nanite; I don’t see why you’d need to create LODs specifically for that. Besides, material complexity is nothing compared to Lumen and VSM.
I’m helping develop a plugin that swaps every mesh in the scene with a custom one (to see the mipmap level used), and the difference is just a few frames (117 to 120 FPS).

You can’t really say in advance how a scene will perform. At the end of the day, it’s up to whoever is optimizing the level to figure out the best tricks to maximize performance. Given new options in Nanite, different teams could find ways to maximize performance for their games.

I am hard-pressed to maximize the performance of my Nanite scenes at native resolution before using upscalers, so that upscalers provide more than just getting above that 60 FPS threshold. Before, with TSR at 75%, I wasn’t hitting 60 in my scene. Now I’m pushing closer to 90 FPS, which is miles ahead of the original.

I just want things to work on a wider range of hardware and to give developers more options to optimize performance.

You may be fine with how performance is, but this can change drastically depending on the type of game you’re building. Mine is a bit more complex and has a wide variety of different worlds, which makes it harder to optimize using Nanite in its current state.

2 Likes

I’m always looking to improve the performance of my game, so I’m always looking for new ways of doing so. Nanite already does a great job, and I do see room for improvement.
I’m just focusing on what I can do today with the tools I have, rather than stressing about what might be available tomorrow. In fact, I’ve spent time on things that eventually got improved by an update anyway.
So I would just focus on creating, and use the standard optimization techniques. It’s easy to procrastinate by trying to optimize too much.
Upscalers are a tool no different from LODs or mipmaps; even AAA studios with decades of experience use them, so there’s no shame in doing so yourself.
They are not an excuse to slack on optimization, just a bridge between pushing new visual levels and having to deal with current hardware. The hardware will get replaced, but your game will hardly ever get a visual overhaul, especially for level design.
Think in terms of hardware when your game is supposed to be released, not what’s available today.

I’m developing an open-world VR game, and I literally disable Nanite, Lumen, and VSMs immediately for everything I install or add to my project. Using the old-school methods for lighting, meshes, and shadows works best for keeping performance where it should be for VR: above 90 FPS. If and when Nanite, Lumen, and VSM work efficiently for VR, then I’ll consider using them; until then, they are a no-go, even on top-end 3090 and 4090 systems.

Epic definitely needs to take optimization of their current features very seriously, rather than continually chasing something new that never really gets game-friendly!!!

1 Like

Upscalers are a tool no different from LODs or mipmaps; even AAA studios with decades of experience use them, so there’s no shame in doing so yourself.
Think in terms of hardware when your game is supposed to be released, not what’s available today.

EVERYTHING said above is exactly what this thread STANDS AGAINST.

Extremely respected studios (especially the Sony-owned ones) DO NOT rely on upscalers.
In fact, it’s the most HATED studios lately that have relied on upscalers.

Temporal AA and upscalers look like absolute trash during gameplay, and customers HATE them when they are forced as a requirement.

Having “amazing” visuals that are unperformant and require an ugly upscaler destroys the very “detail” you built the moment there is gameplay and motion.
This oxymoronic view of game development needs to be eliminated from the UE5 workflow.

That last thing you said is unbelievably disrespectful to consumers.
WE HAVE the hardware. The problem is that the hardware you think is good enough is not reaching regular people.
Games that rely on upscalers promote an elitist market.

THIS THREAD STANDS AGAINST UPSCALERS and TEMPORAL EFFECTS.

WE WANT COMMON SENSE UPDATES THAT ARE PRO REAL PERFORMANCE.

Examples being:

  • Better, faster, more intelligent LOD algorithms that would actually benefit performance for consumers and RIVAL the temptation of Nanite, because Nanite is NOT fit for consumers.
    Most studios, even AAA studios, will not invest in LOD and base-mesh optimization artists. LODs are poorly made nowadays, with crude, deforming LOD algorithms. Help gamers by helping the studios using your product.

  • MUCH better caching of static objects in both the Lumen and VSM departments. VSMs should not cost 2.5 ms in a massive environment that is completely static.
    Either fix the CSM/DFM workflow by fixing the visual issues of CSMs, or add more micro-optimizations to the VSM code.

  • STOP obsessing over TAA and TSR. They look like garbage during gameplay unless you’re at elitist resolutions. All temporal upscalers and AA methods undo the whole point of visual breakthroughs like amazing FX and texture detail. STOP making shaders depend on TAA: contact shadows, hair, Lumen, soft shadows, DOF. THIS IS RUINING visuals for gamers. You haven’t even updated TAA; you just gave devs another BLURRY, insanely expensive upscaler.

The more popular UE becomes, the more Epic needs to focus on creating an EXCELLENT base engine for studios, or gamers are going to suffer because of TAA and upscaler dependence.

I as a gamer have already been affected by the ridiculous TAA dependency in Unreal.
It’s disgusting and ruins games.
I have worked with SEVERAL engines. NONE depend on TAA or limit developers to TAA-dependent shaders as aggressively as Unreal does.
You’re forcing blurry gameplay to become the “new standard”.

THIS IS SERIOUS. WE WANT CHANGES in UE5.4+.
Not for the sake of my studio, but for the sake of the other, LAZY studios that effectively make their content exclusive to gamers with excellent hardware within price reach.
And even console players. Gamers deserve BETTER after spending $400+ in this economy.

1 Like

Thinking in terms of future hardware is pretty silly. I did that before and only found that the people who want to play my game are still on 10- and 20-series GPUs.

Scalability is important, and it’s nonexistent in current Nanite. Upscaling is an important tool for supporting a wide range of PC hardware, but upscaling with no uplift is useless. This is a feedback thread, so I will provide my feedback on the current state of Nanite.


I agree that upscalers are a major W for game developers and gamers. Way back in the day, we used to decrease the resolution of the game with no upscaling at all to improve performance; now we can decrease it while maintaining some level of detail. However, upscalers are not a replacement for conventional contention-mitigation techniques. Until hardware can solve the multithreading synchronization issues, we have to bank on the tried and true.

I’ve been in this industry since ~'07 and have seen how rendering hardware makes it possible to perform more calculations, but waiting on threads to synchronize has always been an issue that traditional optimization techniques aimed to mitigate so as not to bottleneck the hardware. I have experience writing complex renderers with OpenGL and DirectX.


I’ve already been able to optimize Lumen and VSMs. It’s Nanite that is causing me the most trouble right now, with its lack of scalability and optimization options. Epic also tells you that Nanite should be enabled on pretty much every static mesh possible.


I agree with Epic that Nanite typically renders faster. But they neglect to tell you that you then have to worry about contention issues in the rasterization process, with no ability to mitigate them currently.


Here is a proper demonstration of the contention issues.

Details:

Resolution: 1440p @ 100% screen percentage, TSR disabled (TAA used instead)
GPU: RTX 3090ti 24GB
CPU: AMD Ryzen 9 5950X
Lumen: Disabled
VSM’s: Disabled
Reflections: Disabled
Shadows: Disabled

In this scene, I stacked 1,000 planes on top of each other with a simple masked material. This is a completely fresh project in the Launcher version of UE 5.3, running in PIE. I know it’s a very ugly image, but it’s meant to show you what I see in the viewport.

Here is how the mesh was created in 3ds max. Just a simple plane that is heavily subdivided:

[image: the heavily subdivided plane in 3ds Max]

As you can see, standard culling is equipped to handle overdraw like this far better than what Nanite can do currently.

For a demonstration of what the performance would be like with Nanite disabled and no occlusion or scene culling, here you go:

You can see the jump from ~5 ms to ~18.5 ms. In this demonstration, Nanite is faster than non-Nanite with no occlusion culling.

Nanite has a lot of areas to improve, and it comes with massive overhead.

World Outliner for reference:


Very soon, I’m going to be doing a public release so players can report performance.

4 Likes

I too had a simple test where Nanite performed faster than LODs.
BUT the total scene cost went up by 1.5 ms (again, this was a small test) because other features like Lumen and VSM hated dealing with the Nanite complexity.

This is why I feel Nanite should be reserved for baked lighting only. But even then: if baked lighting were the solution to Nanite and bad LODs, then Remnant 2 wouldn’t look like absolute dogcrap while running like dogcrap.

I know there was a CPU bottleneck in that game, but the resolution is still lowered by half without even letting the player know.

I agree with Epic that Nanite typically renders faster. But they neglect to tell you that you then have to worry about contention issues in the rasterization process, with no ability to mitigate them currently.

They need to fix that documentation snippet ASAP, before more studios ruin their games’ performance.

I was talking about HIGH-END graphics, which should be geared toward the HIGH-END cards available when your game launches. If you want native 120 FPS @ 4K on Ultra, then you should have a top-of-the-line card.
Of course people will still use 10- and 20-series cards, but they can’t really complain if their card is 7-8 years old at that point. As of now, my minimum requirement is a GTX 1080, and my game will likely be done in 2025, making that card 9 years old!! Yeah, I think I’ll be good.
Going by the Steam hardware survey, a third of gamers are on at least a 2060 today, so imagine that projected two years out.

I agree; there should be an entry for Nanite in BaseScalability.ini.
Perhaps that can be added manually?
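For what it’s worth, a manually added sketch could piggyback Nanite cvars onto the quality groups a copied scalability ini already defines. The group names below follow the convention the file already uses, but the cvar values are illustrative guesses, not tested recommendations:

```ini
; Hypothetical sketch: attach a Nanite cvar to the existing
; ViewDistanceQuality groups in a project scalability ini.
[ViewDistanceQuality@0]
r.Nanite.MaxPixelsPerEdge=4   ; Low: coarser clusters, fewer triangles

[ViewDistanceQuality@3]
r.Nanite.MaxPixelsPerEdge=1   ; Epic: engine default quality
```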

Alpha masks not being performant in Nanite is a known issue. In fact, you’d be far better off using more complex geometry to cut out those layers than using alpha in the first place.
I’m confident that will be solved in the near future, so I’m not too concerned, nor am I changing my level design around it.

1 Like

Well, yeah. HIGH-END hardware for high-end graphics quality, I agree with. Scalability is the most important thing for me.

If you’re on a 10-series GPU, I should be able to include settings that let you downgrade Nanite without affecting everything. For foliage, r.Nanite.MaxPixelsPerEdge works great; but if you set it too high, you’ll notice the City Sample vehicles really start to break at close distance, while trees just look like normal LODs.

I’m sure they’ll figure it out.

1 Like

Definitely! That’s why I suggested trying to see if a Nanite section added to BaseScalability.ini could hold different values for the different scalability settings.

Also, is there a way to override that file per project instead of per engine version? It makes no sense to apply the same settings to every project using that version of the engine.
I tried putting a BaseScalability.ini file in my project’s Config folder, but the one in the engine takes priority.

If you want to override the BaseScalability.ini file in your project, copy the file to your project’s Config folder AND RENAME THE FILE.

YOU MUST RENAME THE FILE TO “DefaultScalability.ini”

[image: DefaultScalability.ini inside the project’s Config folder]

And you’re good to go!
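So, assuming a project named MyProject (a hypothetical name), the resulting layout would look like:

```text
MyProject/
├── Config/
│   └── DefaultScalability.ini   <- copy of Engine/Config/BaseScalability.ini, renamed
└── MyProject.uproject
```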

Awesome, exactly what I needed!

Also, I found this on the official roadmap: https://portal.productboard.com/epicgames/1-unreal-engine-public-roadmap/c/1250-nanite-optimized-shading

2 Likes

I am not sure, but empty draws should be fixed in 5.3.

If you have problems with lots of objects in the scene, instancing and reducing primitive counts are important. Use a Packed Level Actor; it’s basically a Blueprint with an auto-packer that converts into ISM and HISM components. It’s only for static meshes.

Be careful: a Level Instance is not the same as a Packed Level Actor. It gives basically the same result as placing the actors in the level manually.

Another thing is VST; it can cost CPU too.

“Nanite is for lazy developers”
Yeah, I’ve heard that a few times, from guys with limited capacity in their heads :stuck_out_tongue_winking_eye:

You can check ProfileGPU to see how much Nanite GPU culling costs (Cull.Rasterize or something like that). Basically, when your scene is fully Nanite, you will get no benefit from classic occlusion queries.

Even if you have low-poly content, for example a room with 6 different pillars, it should be much better, because those parts are divided into clusters, which is great, and you are not bound by old overdraw issues like a few vertices per pixel.

And my experience is something like 5-10 times fewer polygons rendered with Nanite than with classic LODs and impostors, plus a huge reduction in draw calls, if you know how this system works. It’s best with VST: bigger and fewer textures, fewer materials. (Large scenes comparable to the Matrix demo or Fortnite in number of assets or materials.)
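For anyone who wants to reproduce that kind of measurement, the stock console commands are a reasonable starting point (exact Nanite pass names vary between engine versions):

```text
stat unit    -- frame, game-thread, draw-thread and GPU times
stat gpu     -- live per-pass GPU timings
ProfileGPU   -- one-shot detailed GPU capture; look for the Nanite culling/rasterize passes
```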

1 Like

You’re being extremely rude, and describing yourself. People who fail to understand that Nanite hurts performance are the ones with the “limited capacity” to accept multiple test results.
Slapping on Nanite gives worse performance than taking the time to optimize.
Which one sounds like the lazy alternative?

That’s why The First Descendant and Tekken 8 use LODs. (The former has Lumen and performs miles ahead of both City Sample without AI and Fortnite.)

You can either make good materials, keep a poly budget with LODs, and optimize your draw calls, or you can slap on Nanite and blurry upscalers like DLSS.

Nanite hurts gamers with regular hardware, and Epic is trying to force a Nanite-and-TAA workflow on devs.

of course :grinning::grinning:

PS: for example, Electric Dreams is slow, and I get it. They use different precision on models, put thousands of instances in small areas, and the whole project is on Substrate; they are testing how far they can go and what the engine can handle. The point is: profiling without knowing the scene, the project budget, etc., and then making assumptions, is the dumbest approach I’ve seen :stuck_out_tongue: but not for the first time :grinning:

1 Like

The Electric Dreams project shows their lack of care for regular gamers without 4090s.
I tested it myself and profiled that monster, but all the games/projects here show what’s up with Nanite and UE5 in general.

And one last thing: VSM works with cluster culling.

good luck

Very insightful posts, solid snake. Unfortunately, very few game developers care about performance. Especially with the additions of DLSS and FSR, it has become standard to rely on that tech. No game released with UE5 can be described as groundbreaking; none of them have graphics that will blow anyone’s mind, they still look like games. But the problem is that, with the very powerful GPUs we have today and no real difference in visual quality compared to games from 5-6 years ago, today’s games are very resource-heavy.
And I will agree with you that manual work on lighting and LODs will always be a better way of making optimized games than just slapping on Nanite and Lumen, especially for “lower-end” GPUs (lower-end meaning anything that isn’t current-gen, even though those cards were capable of running beautiful games with no issues just a couple of years ago).

very few game developers care about performance. Especially with the additions of DLSS and FSR, it has become standard to rely on that tech

Sadly, it’s mostly the developers with tons of funding that don’t care.
Indie games, which are usually directed and produced by extremely passionate people, do think about performance, but they lack the budget to invest in tools that would help it.

No game released with UE5 can be described as ground breaking

The LLN UE5 reveal and the Matrix demo were the most mind-blowing. But both ran at 30 FPS, heavily upscaled. Upscaling looks best when the target output is 4K, but both were at 30 FPS, which is ridiculous for our hardware, for next-gen consoles, and most importantly for gameplay smoothness.

Gamers are being forced to reach the basic smoothness of 60 FPS with blurry upscalers (and yes, that includes DLSS).
You have a bunch of people saying DLSS looks better than native, when they’re comparing DLSS to native with TAA, which looks horrible. People keep using the worst possible scenario as the reference now.
Kind of like FN being Epic’s reference for UE. It’s a bad example, since most games have mostly static environments, and so we lack static-object optimization tools.

Post #4–Why 30fps is not acceptable and where FN’s 6 billion revenue should go.

Epic needs to invest in easy performance-enhancing workflows for developers, so that consumers/gamers can benefit from studios using Unreal.
We are at the point where people become disgusted with studios after they announce they’re using UE, because of how unnecessarily unperformant and temporally dependent it is.

It’s their ENGINE, and they have at least $4 billion to use to enhance it for the good of games.
But unless this hits #1 in votes, we won’t get that from Epic.
And I’m not just some dude whining about performance:
I have given advice over and over on this forum.
Ideas that would attract developers and help consumers.

But sadly, because this seems like such a niche concern among developers, I’ll have to amass funding from a starting amount of $0 to get this engine fixed, and then offer my version.

Take a look at this:

Post #3–AI workflow for UE5 for optimizing static meshes.