After getting into the final stages of optimization for my game, I'm happy to report a stable ~94-96 FPS at native 1440p in the editor with TAA in UE 5.4. Before, I was only getting about ~72 FPS at native 1440p in the editor with UE 5.3.
My scenes are extremely heavy with Nanite masked foliage (trees, grass, and other plant life). Optimizing with Nanite was a feat, but it wasn't impossible. What this shows me is that Nanite is more than capable of handling complex scenes IF you know how to use it properly. Nanite has so many CVars available that you really have to dive deep to figure out what will and will not work.
If you want to know what these settings are, you can go here or type the commands into the console like this:
r.Nanite.MaxPixelsPerEdge ?
r.Nanite.MaxPixelsPerEdge=4
r.Nanite.DicingRate=0.5 ; This seems to go hand in hand with the above CVar, as per my digging around in the engine source. Play with this value as well. Lowering it did seem to give me a boost in frame rates.
In my game, 2 is the lowest this setting can go without running into Nanite memory issues due to the other settings below. I stick with 4 as the minimum to ensure there are no issues and to improve performance. (Note: DO NOT increase this value without monitoring the same setting on individual static meshes. This will cause Nanite to aggressively LOD all Nanite geometry in the world. You can control the effect per static mesh as of 5.4.)
r.Nanite.MinPixelsPerEdgeHW=8.0
Lowering this value improved my performance stats greatly. I'm not sure why, but I have some speculations based on reading the source code. I'm no expert in Epic's code, so I won't put those speculations here. Just play around with this value.
I lowered these by extreme amounts to be used with 'r.Nanite.MaxPixelsPerEdge' >= 4. Another developer by the name of 'AirSickLowLander' let me know that some of the defaults and more extreme values were causing EXTREME performance loss on low-end GPUs. This was unacceptable to me, so I reduced these to the minimum. Use with caution, as you could potentially have issues with Nanite running over budget, which stops triangles from being drawn at all. This is another reason why I increased my MaxPixelsPerEdge to 4.
Here are the other settings. You can also try different values for these, but these worked best for me.
r.Nanite.PrimaryRaster.PixelsPerEdgeScaling=10.0
r.Nanite.ShadowRaster.PixelsPerEdgeScaling=10.0
r.Nanite.FastTileClear=1
r.Nanite.FastVisBufferClear=1 ; or try 2
r.Nanite.MaterialSortMode=2
; This one I'm 98% sure is about how you've built your scene. Look it up in the link above to confirm that.
r.Nanite.UseSceneInstanceHierarchy=1
r.Nanite.AsyncRasterization=1
r.Nanite.AsyncRasterization.ShadowDepths=1
r.Nanite.ShadeBinningMode=1 ; or try 2
r.Nanite.Bundle.Shading=1
r.Nanite.ImposterMaxPixels=5
r.Nanite.PrimShaderRasterization=1
r.Nanite.VSMMeshShaderRasterization=1
; I noticed a further boost when using these settings as well. I made assumptions about these and am still experimenting. But I tried thinking about Nanite the way I think about Virtual Textures, which led me to these settings.
r.Nanite.Streaming.StreamingPoolSize=256
r.Nanite.Streaming.MaxPageInstallsPerFrame=4
r.Nanite.Streaming.MaxPendingPages=128
r.Nanite.CoarseMeshStreaming=1
r.Nanite.CoarseMeshStreamingMode=1
r.Nanite.CoarseStreamingMeshMemoryPoolSizeInMB=100
r.Nanite.ViewMeshLODBias.Min=0
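If you want settings like these to persist between sessions instead of retyping them in the console, one common approach is to put them in your project's Config/DefaultEngine.ini under [SystemSettings] so they're applied at startup. This is just a sketch using a few of the values from this post; tune them per project:

```ini
; Config/DefaultEngine.ini (project-level) -- applied at engine startup.
; Example values only, taken from this thread; verify against your own budgets.
[SystemSettings]
r.Nanite.MaxPixelsPerEdge=4
r.Nanite.PrimaryRaster.PixelsPerEdgeScaling=10.0
r.Nanite.ShadowRaster.PixelsPerEdgeScaling=10.0
r.Nanite.Streaming.StreamingPoolSize=256
```

Remember that DefaultEngine.ini ships with the project, so anything you put here affects packaged builds as well as the editor.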
; TSR
r.TSR.AsyncCompute=3
r.TSR.History.SampleCount=2
r.TSR.History.R11G11B10=1
r.TemporalAACurrentFrameWeight=0.1
; VIRTUAL TEXTURING:
r.VirtualTextures=True
r.VT.EnableAutoImport=False
r.VT.MaxUploadsPerFrame=4
r.VT.MaxUploadsPerFrameInEditor=4
r.VT.MaxContinuousUpdatesPerFrame=1
r.VT.MaxContinuousUpdatesPerFrameInEditor=1
r.VT.RVT.TileCountBias=-1
r.VT.PoolSizeScale=1.0
; Found these thanks to another dev: vfXander
AllowAsyncRenderThreadUpdates=1
AllowAsyncRenderThreadUpdatesDuringGamethreadUpdates=1
AllowAsyncRenderThreadUpdatesEditor=1
AllowAsyncRenderThreadUpdatesEditorGameWorld=1
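For quickly A/B testing values like these without touching your project config, the engine-level Engine/Config/ConsoleVariables.ini (its [Startup] section) is handy. Note that it applies machine-wide to every project using that engine install, so treat it as a scratchpad for experiments rather than a shipping config:

```ini
; Engine/Config/ConsoleVariables.ini -- machine-wide, good for local experiments.
[Startup]
AllowAsyncRenderThreadUpdates=1
AllowAsyncRenderThreadUpdatesDuringGamethreadUpdates=1
```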
; GREATLY IMPROVES MASKED FOLIAGE PERFORMANCE
r.EarlyZPass=2
r.EarlyZPassOnlyMaterialMasking=True
; These depend on the type of game and art style you're going for. This helped me squeeze out more quality while also decreasing some of Lumen's defaults.
r.GBufferFormat=3
r.DefaultBackBufferPixelFormat=4
r.PostProcessing.PropagateAlpha=0
I haven't experimented with these yet, but I am using them. So again, use with caution or find values that work best for you. Thank AirSickLowLander for these as well.
r.MinScreenRadiusForDepthPrepass=0.300000
r.MinScreenRadiusForLights=0.100000
r.GenerateLandscapeGIData=False
r.VelocityOutputPass=0
This can indeed be fixed in 5.4 directly in the Static Mesh Details panel. Look for "Max Edge Length Factor" and push it to 1 (the default is 0 for all meshes).
If you notice "spikes" coming from foliage, then disable "Use MikkTSpace Tangent Space" in the Build Settings underneath.
Not only are the clouds TAA-dependent (which is ridiculous, since they look worse than Decima's implementation and look normal without TAA), they cost 8 ms+ on the LOWEST engine scalability with the actor's default trace distance.
1080p on a 3060 (13 teraflops).
Meanwhile PS4 HFW has this done looking better and more performant.
Complete disregard for top voted feedback.
Have you guys tried MegaLights? It seems to address some of your concerns. There are a few drawbacks (more blur than Lumen, and much more blur with moving actors), but it seems to be a good compromise overall.
After Epic’s long parade promoting Nanite as a faster pipeline, Nanite has been demystified for THOUSANDS of consumers AND developers thanks to the most in-depth video covering it to date by Threat Interactive.
As studios progress in knowledge and needs for development, hardware tessellation is a HARDWARE feature that is being completely wasted by UE5 and current games. If studios want to increase polygonal density for next-generation detail while taking advantage of efficient quad rendering, Phong and PN tessellation is an easy trick that UE5 rips away.
Traditional tessellation is faster, allows a higher level of optimization, and lets developers avoid being STUCK with VSM rendering or Nanite’s 2x slower per-pixel shading cost. There are developers who need plugin support and newer UE5 features but REFUSE to butcher performance with Nanite. So LODs and meshes take on a polygonal look on high-end hardware, when they could have had access to traditional tessellation for quick relief on geometry detail.
And a bonus request: allow developers to enable modified Nanite (with reduced functionality like streaming, compression, etc.) rendering on LODs, since distant LODs crush quad rendering and Nanite’s software rasterization is faster in those cases.
What I read from that statement is that the amount of work needed to support all platforms with tessellation was way too much - and ongoing support of it.
Have you played with Virtual Heightfield Meshes? They perform nicely and give good resolution for things such as footprints and material detail - I used it in a 5.3 game level (Nanite/VSM btw) that was 2km x 2km and was getting around 100FPS from the populated level with lots of trees, roads, a town etc on a lower end 30 series card. Can’t say I’ve scaled it up from that size though.
Your anger is NOT justified - you’re just frustrated that you haven’t been able to add all the cool things you want straight into a generic engine and are convincing yourself that it should be the job of others to provide you with exactly what you want. In other words, you’re having a hissy fit.
If you’re really wanting to get performance from a landscape - you should probably look into converting it to meshes, which also opens the option to convert close sections to Nanite tessellation.
I’ve had a number of people reach out to me about optimizing LODs and reducing things like overdraw - they made the mistake of listening to your bleats - but they found the performance increased no end when they stopped clinging on to those ideas and fully embraced Nanite and the complete Nanite workflow.
you want straight into a generic engine and are convincing yourself that it should be the job of others to provide you with exactly what you want
It’s not what I want, it’s what 3rd-party titles NEED, and SEVERAL thousands of gamers and developers want the same thing.
Have you played with Virtual Heightfield Meshes?
Irrelevant, several titles are trying to optimize meshes and need access to dynamic displacement, phong/PN tessellation.
work needed to support all platforms with tessellation was way too much - and ongoing support of it.
That’s their job, and they did it for years. Games do not need to look like crap (no access to tessellation) or perform like crap (Nanite).
but they found the performance increased no end when they stopped clinging on to those ideas and fully embraced Nanite and the complete Nanite workflow.
Irrelevant and incoherent. Quad overdraw is a massive part of optimization, and the engine should have alternative rendering methods that developers can use to relieve it where Nanite fails (thin objects, foliage), but Nanite doesn’t work like the first concept and hates WPO.
What has been proven about Nanite is not a joke or a debate. Not everyone wants to bloat their assets with millions of unneeded polygons just to get reasonable culling, or wants to invite a dramatically higher per-pixel shading cost.
OP is becoming less and less credible.
Nanite is faster in 5.5, but I haven’t tried it myself, and it requires tweaking some settings, possibly per mesh, and dividing overly large assets into smaller pieces to work better.
MegaLights is experimental but seems to be a really good compromise of visual quality vs performance.
TAA and TSR must be avoided, but DLSS and XeSS (even running on an AMD GPU) are good; or, if you want an upscaler that doesn’t use temporal data, you can use FSR 1.0, which is better than FXAA/SMAA.
I can only agree that in their current state, TAA and TSR shouldn’t be used, but most devs are using the DLSS and XeSS plugins, which give very good results even with motion and transparent objects, with close to no ghosting compared to TAA and TSR, which have a lot of ghosting issues.
And I think that unless you're playing at 1080p (upscalers work very poorly at this resolution anyway), everybody should be using upscalers at 1440p and higher resolutions. If someone wants zero ghosting for competitive games, then they should use FSR with the temporal upscaling part disabled (I don’t know if the same is possible with DLSS and XeSS). At 1440p and above, native rendering is a waste of resources that could be used for things with more visual impact, like the texture, lighting, and shadow budgets.
The issue isn’t upscalers; the issue is people who think they have to play at native 4K if they own a 4K display. To me: texture resolution > lighting > shadows > rendering resolution.
DLSS & XeSS via the circus method (more technically known as increasing the temporal history size) look reasonably clear even at 1080p.
The problems are cost, possible Sony contract limitations regarding the use of AI upscalers, some remaining ghosting, and the fact that we don’t need complicated AI solutions to achieve these reasonable results.
Why does the circus method look so much better? Because these upscalers kill temporal aspects quicker and fall back to spatial (not completely, just faster). FSR1 with some morphological AA tweaks can easily beat the spatial upscaling in DLSS.
The issue isn’t upscalers, the issue is people that think that they have to play at native 4K if they own a 4K display
Unfortunately, half the pipeline abuses temporal upscaling and TAA’s unneeded smearing tendencies. That’s what the votes are for (making the engine less dependent on this), since it’s an absolute lie that this is being done for performance “enhancements”.
#1 most-voted feedback (175 votes now!), with massively experienced names outside the forums supporting this call to action, yet ZERO response in 5.5.
UE5 has legitimate issues that should be addressed, from substandard TAA to removing options that encourage the use of less optimal tech, Tessellation for example.
At the same time, the quality of discourse here is outstanding. Throwing around insults, assuming malice, and OP dismissing arguments because they’ve got the most votes on a forum post, etc, is extremely counter productive and completely unprofessional.
This forum is for legitimate discussion around productivity tools. It’s not about world events or team sports. If the conversation can’t be kept rational then it’s no wonder Epic ignores it.
I watched your last video; honestly, you can’t say that this game is representative of every UE5 game.
Silent Hill 2 has many issues, like crazy flickering everywhere, broken hardware Lumen, animations locked to 30 fps for many objects and cutscenes, and lighting that is so flat.
Ultra-bad optimization: the game runs badly, 56 fps with RT off at 1440p on a 7900 XTX, seriously. This is like a worst-case-scenario game.
Why wouldn’t you review a game making good use of UE5 for a change? Like RoboCop: Rogue City, Senua’s Saga: Hellblade II, or maybe Empire of the Ants (it looks good and has a free demo on Steam).
Lumen performance too slow? Have you tried MegaLights in UE 5.5?