NEW(3/22/2024) UE5.5+ feedback: Please Invest In Actual PERFORMANCE Innovations Beyond Frame Smearing For Actual GAMES.

Not just performance, but also bug fixes. Some of these bugs have been around forever, which reminds me of Unity. E.g. I just found that actor component tick behavior has changed from UE4. Nobody notices these kinds of bugs, nobody from Epic's technical team reads the forums much anymore, and they go unfixed forever.

3 Likes

Bugs are a sign of incoherent programming,

Incoherent programming equals inefficiency,

Inefficiency equals worse performance.

Not to offend any engine programmers, but we are human. As programmers, it's a lot to keep up with. But we shouldn't ignore the mistakes we might have made and just march on to the next version.

The UE5 programmers are getting paid to add new stuff instead of polishing the current technologies. Whoever is in charge of funding needs to understand WE DON'T WANT NEW FEATURES.

There's no point in developing them until everything is fixed and optimized.
It's like moving into a house that isn't finished yet.

I'm not blaming the engine programmers; I'm blaming whoever is ignoring the serious performance problems and isn't directing FN's 6-billion-dollar income towards performance innovations in Unreal's new features.

1 Like

BUMP.

This isn't just about my game or just our projects.
This is about all games: LOOK BELOW.

If Fortnite is performing like this… what will happen to game performance when the majority of studios decide to use UE5 for games NOTHING LIKE FORTNITE?

Our games, led by cheap, lazy developers and studios, WITHOUT A DOUBT will perform like Remnant 2 regardless of how low you set your settings.


We will be stuck with BLURRY, smearing games until we fix this.

This is the ugly and sad future of games… if this isn't addressed in a completely new way:

I would like to share this idea that could reverse the direction we're going in:

(Excuse the rough draft on the first post, didn't expect their forum to lock me out of editing :cold_face:, just got a little too passionate about the idea at the end there… :roll_eyes: )

This may seem like a fantasy panacea for our current situation in the gaming industry.
But we already have a step forward in this direction.

Take a look at RTX Remix, a workflow enhanced by AI.
Think of an AI model that could optimize meshes 10x better than Nanite ever could.

The optimized mesh will perform miles ahead of the Nanite mesh (whether 100k or 2k tris), not just because it's lower poly, but because high-poly meshes destroy/slow shadow and lighting calculations.

I'm not talking about AI that makes a magical mesh from thin air.
I'm talking about an intelligent algorithm that takes a mesh you would normally hand off to Nanite.

The AI model scales its tri count down to the bare minimum while preserving detail via texture tricks, UV tiling optimization, and AI-enhanced textures no human could put together without HOURS of work that no one would realistically pay for.

This could solve or mitigate some of the major challenges of modern game development:

  • Performance problems
  • VRAM problems
  • Games forcing blurry temporal upscalers to reach 60fps.
  • Finding someone to do the work by hand.
  • and SAVING TIME

A few years ago this would have been crazy to suggest. But the fact is, Nvidia has shown us the power of AI models time and time again.
This is now possible.

THIS IS THE FUTURE… of game development.

AND SOMEONE needs to invest in it already.

1 Like

The AI hype? Really? That thing has been pushed ever since the crypto stuff was shut down.
I'll say just this: do not trust the AI hype.
There is no Skynet. There is no AI.
LLMs are not AI at all. They're just applied statistics algorithms that have existed for decades already. The only difference is that in the past the hardware was too slow to fully utilize them, but that's all. And all the LLM algorithms and neural networks have huge limits and flaws.
There is no magic wand.
The real hard work of programming anything still comes from the human brain, and so it will be for a very long time.
Which is another good reason for the programmers of professional software like Unreal Engine or any other 3D engine to make robust, optimized, fully working code.
It is really useless to pay programmers to code tons of features in an Alpha/Beta stage that are either unusable or barely usable. It is a waste of time and money. Even if the marketing departments love that the most, it is a very bad strategy in the long run.

There is no magic wand.

This isn't one? But you know what? Everyone acts like Nanite is a magic wand too, and it performs worse.

The real hard work of programming anything still comes from the human brain, and so it will be for a very long time.

Yet studios are still being run by lazy developers pressed by deadlines that kill modern game performance. Where is your human innovation?

This isn't magic.
Did you see RTX Remix?
Those AI textures (not models) were pretty darn magical to me.
But it wasn't magic. It was trained by human innovation.
Human innovation that can spread into everyone's hands via AI.

Studios and gamers need this, or our games are going to perform worse and worse on great hardware.

Or do you have a better idea to stop more games that look like Immortals of Aveum and
Remnant 2?
Because I find blurry, unperformant games unacceptable.

Honestly, the AI model is not that complex in my opinion compared to a lot of other stuff.
InstaLOD already has an algorithm for this.
But I'm talking about an AI touch that optimizes the textures via a near-impossible human workflow: optimizing tiling and faking parallax occlusion based on the original mesh someone would have handed off to Nanite/VSMs.

EDIT: You didn't even go to the links? I can tell because no one has clicked on them yet. Really rude, dude; you didn't read the concept and decided to trash me ASAP for no reason.

No one needs the AI hype. Really.
Why you took that personally I don't know.
If you want to trust the AI hype, you are free to. Still, it is just hype for marketing purposes.

1 Like

Still, it is just hype for marketing purposes.

Nvidia makes more money off of AI than off of gamers.
Wouldn't exactly call it marketing.

Again, go LOOK at RTX Remix.
I'm not talking about DLSS or AI-enhanced graphics. I'm talking about a new workflow for all developers and all RT hardware, including consoles.

Why you took that personally I don't know.

Because I don't appreciate comments that won't fix anything.
At least I'm trying to figure out how to massively change lazy studio workflows.
Nanite was marketing.

Where games are heading at the moment is a blurry temporal nightmare with low performance.
We need better ideas from somewhere. If the engine devs won't work on more performance, and if more studios can't find a performant workflow, then we need to be the ones to innovate in new directions, because we're hitting a dead end.

Nanite is for virtual production. So the marketing was for the film industry lol.

If Nanite mixed in traditional optimization techniques, it's possible we could see better performance. I remember when mesh shaders first came on the scene. I saw the potential then and still see it now. But it seems that many developers are struggling to implement proper DX12U support that doesn't murder frame times.

1 Like

Nanite is for virtual production. So the marketing was for the film industry lol.

"For this demo, we used the cinematic Quixel assets, which would only be used in film."

A lot of these technologies are not entirely bad

And obviously it looks good in that demo. But the demo runs at a high resolution (not easily attainable for a lot of people) and at 30fps. Same situation with the Matrix demo.

The biggest thing holding those projects to 30fps is the Nanite meshes, hands down: the lack of hand-made optimization. WE KNOW we can get an amazing-looking 30fps game with UE5. And UE5 has gotten major performance improvements since.

But no common-sense innovations have been set on the roadmap for UE5.

There are several reasons why a studio wouldn't want their game running at 30fps:

  • 30fps is less responsive and slideshow-ish when many other titles offer 60fps.

  • This is especially true if gameplay depends on input timing and combos, hand-eye coordination, and reaction time.

  • The motion clarity at 30fps is absolutely HORRID. With content below 60fps, most screens will judder the last 2 FRAMES of your current motion, especially if it's fast-moving 30fps content. (So, like an action game?.. wait, how many of those are there? Oh darn, a lot.)

I don't want to hear elitist crap like "buy a better (vastly more expensive) TV or monitor."

"If you don't like 30fps then upgrade ($$$) your GPU."

That is NOT innovation!?

You're telling me that with 6 billion dollars of revenue from FN sales… you can't pay for more programmers, computer graphics consultants, and veteran engine programmers?
It's like no UE5 engine programmer even reads SIGGRAPH papers and presentations!?

2.5k views, and so far this is the 6th most-voted topic with only 37 votes.
The goal? To be the top feedback post.

I have spent my day testing Lumen, VSM, and Nanite performance in the new 5.3 release.
The roadmap for UE5.3 had barely any mention of targeting performance improvements.

r.Shadow.RadiusThreshold 0 doesn't seem to force VSMs to respect LODs like Nanite meshes.
EDIT: This command is inconsistent.

Only VSMs and a newer (so possibly faster) version of C++ are now being used for UE5.

So far, in comparison with 5.2, we may have gotten a possible 10% performance increase. Maybe not even that. And yes, I scaled down the bumped-up settings in 5.3 to match 5.2.
Tests were done with the City Sample with zero AI or game logic running, to purely test GPU rasterization.

Lumen still relies heavily on massive amounts of past frames being smeared on top of your current frame, so motion/action still takes a blurry hit.

Nanite performance. Is it finally better?
Not sure if it's sabotage, but the mesh editor and LOD systems freeze UE5.3
(Unreal Engine 5.3 Released - #85 by _Solid_Snake).
So now I can't even test LODs versus Nanite performance anymore unless I jump through hoops in another LOD-creation application, then import, and all this other crap.

Honestly, this is too much. I'm super tired personally, not just physically but mentally tired.
TIRED of good ideas being slapped down by Epic and the UE5 programmers.

If such a massive, popular engine fails to focus on REAL performance instead of upscaling with blurry temporal crap, then gamers with good hardware are screwed, and people who like crisp games and action/motion are screwed.

If UE5.4 doesn't focus on real innovations to fix development problems and doesn't stop forcing people to use a 100% Nanite+Lumen+VSM workflow, then the future of games is going to be a complete oxymoron.

1 Like

I've been fighting UE5's performance issues the best I could and have determined that I'm simply wasting my time. Visual quality can be upgraded if you downgrade the Lumen defaults. (I know how that sounds.)

By overriding BaseScalability.ini and adjusting the Lumen defaults for each scalability preset, I was able to get some performance gains. I tested native performance on 3 different GPUs. One of the GPUs was tested in combination with an older CPU.
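For anyone who hasn't used scalability overrides before: the project's Config/DefaultScalability.ini overrides Engine/Config/BaseScalability.ini, and any of its sections can be re-declared per quality level. A minimal sketch of the mechanism (placeholder values as examples, not my exact settings):

[GlobalIlluminationQuality@3]
; Epic-level Lumen overrides go here, for example:
r.Lumen.ScreenProbeGather.DownsampleFactor=32
r.Lumen.TranslucencyVolume.TraceFromVolume=0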

Main Development PC:
GPU: RTX 3090ti 24GB
CPU: Ryzen 9 5950X
RAM: 64GB
Monitor: 1440p

PC 01:
GPU: RTX 2080ti 11GB
CPU: Ryzen 9 5950X
RAM: 64GB
Monitor: 1080p

PC 02:
GPU: GTX 1080ti 11GB
CPU: Intel Core i7 7800X
RAM: 16GB
Monitor 1080p

GPUs were not swapped. These are separate systems.

I tested a packaged build on all of these machines at their monitors' native resolutions. All builds ran the same scalability level, which was Epic.

Engine Version used for testing: 5.2
I'm in the process of creating the test for 5.3.

Main Development PC: at native 1440p (no upscaling), I was able to get 54 FPS. The average sat at around 55 FPS.

PC 01: I was only able to get 40-45 FPS.

PC 02: I was only able to get 20-24 FPS.

This is after adjusting r.Nanite.MaxPixelsPerEdge to 2 instead of the default 1. The Lumen settings have also been severely downgraded in the overridden DefaultScalability.ini. If I remove these optimized settings, I get a drop in performance.
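For reference, that cvar doesn't have to be set in the console every session; something like the following in DefaultEngine.ini (the usual [SystemSettings] cvar block) applies it project-wide:

[SystemSettings]
r.Nanite.MaxPixelsPerEdge=2
; Default is 1; larger values let Nanite pick coarser clusters, trading silhouette detail for rasterization cost.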

A Lumen setting that may or may not contribute to performance gains: r.Lumen.SampleFog=0

After ensuring that Lumen isn't killing the performance, I went on to optimize VSMs using the following command(s):

r.Nanite.ProgrammableRaster.Shadows=0
; This command has some visual drawbacks that may or may not be noticeable.
r.Shadow.Virtual.ResolutionLodBiasDirectional=1.5
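; A positive ResolutionLodBiasDirectional lowers VSM page resolution for directional lights (roughly one mip per +1.0), trading shadow sharpness for performance.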

This is the scene I used for benchmarking. It's heavy in Nanite foliage:

3 Likes

On the main development PC, can you show "stat gpu" on your scene?
I want to see the biggest hit to your perf.
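For reference, the built-in console commands for that kind of breakdown:

stat unit
; Frame, Game, Draw, and GPU thread times at a glance.
stat gpu
; Per-pass GPU timings (Lumen, shadow depths, Nanite, post-processing, etc.).
ProfileGPU
; One-shot detailed dump of the current frame's GPU work.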

Also post the debug views for each feature (Nanite, Lumen, VSMs).
Also, we have met before on Reddit.
I applied some of your console commands and found major drops in performance.
I forgot which ones, though.

It was probably that I posted a version of scalability settings with Lumen's radiance cache disabled. I did this for testing visual quality but am not sure how that impacted performance. I saw a performance decrease using Epic's default settings for Lumen.

I enabled AsyncCompute for Lumen reflections, but Epic turned it off because they say they see better performance with it off. So if you use the 5.3 defaults for AsyncCompute, you'll be fine.

These are Epic's defaults… I had the reflections set to 1 in my Reddit post.

r.Lumen.DiffuseIndirect.AsyncCompute=1
r.Lumen.Reflections.AsyncCompute=0

And then in my DefaultEngine.ini, I disabled the radiance cache using:

r.Lumen.ScreenProbeGather.RadianceCache=False
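; Turns off the world-space radiance cache that Lumen's screen probe gather normally interpolates distant lighting from.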

But I would have to test the performance differences before I can say that this would have caused an issue.

I'll post some more details on the current state of performance in my game soon. I'm in the process of compiling the 5.3 source. I know this is going to require that I rework some of the settings, thanks to the VSM updates.

2 Likes

Vote for LTS! Request a long term support version of UE

1 Like

Also post the debug views for each feature (Nanite, Lumen, VSMs).

System:
GPU: RTX 3090ti 24GB
CPU: Ryzen 9 5950X
RAM: 64GB
Monitor: 2560x1440

All screenshots have Nanite enabled and are captured at native resolution.

I am now using UE 5.3, so the VSM cache page view is not the same as in 5.2. The VSMs in 5.2 were rendering blue in the debug view mode. I have since updated the scene so that I can make everything green. However, I still get the same performance.


Frame Time with Lumen + Nanite + VSMs + TSR @ 100% native resolution

Lumen Visualization

Nanite Overdraw Visualization

VSM CachePage Visualization



This is my performance with Lumen disabled (NOTE: TSR is also enabled again)


This is with VSMs & Lumen disabled (which I know is one of the biggest killers of performance):


With Lumen, VSMs, and TSR disabled, this is the performance I get:


This is with Lumen, VSMs, and TSR disabled. Only traditional shadow maps are being used in this screenshot:


This is the performance I get when not using TSR (TAA instead), but everything else is on Epic.


This is the performance I get with TSR enabled and the screen percentage at 50%:


My biggest killer is happening on the GPU. TSR eats 2 milliseconds of time when the screen percentage is 100%… so I disabled it and got over 60 FPS at native 1440p.

I have heavily modified Lumen's default settings in my DefaultScalability.ini and am running on Epic settings across the board.


In my DefaultEngine.ini, I set these values:

r.Nanite.ProgrammableRaster.Shadows=0

r.Lumen.TraceMeshSDFs=0
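; With mesh SDF tracing off, Lumen's software tracing falls back to the global distance field only (cheaper, slightly less detailed near geometry).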
r.Lumen.SampleFog=0
r.Lumen.TranslucencyReflections.FrontLayer.EnableForProject=False

r.Shadow.Virtual.ResolutionLodBiasDirectional=1.5

In my DefaultScalability.ini, I set these values:
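; @2 is the High scalability level (sg.GlobalIlluminationQuality 2).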
[GlobalIlluminationQuality@2]
r.DistanceFieldAO=1
r.AOQuality=1
r.Lumen.DiffuseIndirect.Allow=1
r.LumenScene.Radiosity.ProbeSpacing=16
r.LumenScene.Radiosity.HemisphereProbeResolution=2
r.Lumen.TraceMeshSDFs.Allow=0
r.Lumen.ScreenProbeGather.RadianceCache.ProbeResolution=8
r.Lumen.ScreenProbeGather.RadianceCache.NumProbesToTraceBudget=200
r.Lumen.ScreenProbeGather.DownsampleFactor=64
r.Lumen.ScreenProbeGather.TracingOctahedronResolution=8
r.Lumen.ScreenProbeGather.IrradianceFormat=1
r.Lumen.ScreenProbeGather.StochasticInterpolation=1
r.Lumen.ScreenProbeGather.FullResolutionJitterWidth=0
r.Lumen.ScreenProbeGather.TwoSidedFoliageBackfaceDiffuse=0
r.Lumen.ScreenProbeGather.ScreenTraces.HZBTraversal.FullResDepth=0
r.Lumen.TranslucencyVolume.GridPixelSize=64
r.Lumen.TranslucencyVolume.TraceFromVolume=0
r.Lumen.TranslucencyVolume.TracingOctahedronResolution=2
r.Lumen.TranslucencyVolume.RadianceCache.ProbeResolution=8
r.Lumen.TranslucencyVolume.RadianceCache.NumProbesToTraceBudget=100

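; @3 is the Epic scalability level (sg.GlobalIlluminationQuality 3).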
[GlobalIlluminationQuality@3]
r.DistanceFieldAO=1
r.AOQuality=2
r.Lumen.DiffuseIndirect.Allow=1
r.LumenScene.Radiosity.ProbeSpacing=8
r.LumenScene.Radiosity.HemisphereProbeResolution=3
r.Lumen.TraceMeshSDFs.Allow=1
r.Lumen.ScreenProbeGather.RadianceCache.ProbeResolution=16
r.Lumen.ScreenProbeGather.RadianceCache.NumProbesToTraceBudget=300
r.Lumen.ScreenProbeGather.DownsampleFactor=32
r.Lumen.ScreenProbeGather.TracingOctahedronResolution=8
r.Lumen.ScreenProbeGather.IrradianceFormat=1
r.Lumen.ScreenProbeGather.StochasticInterpolation=0
r.Lumen.ScreenProbeGather.FullResolutionJitterWidth=0
r.Lumen.ScreenProbeGather.TwoSidedFoliageBackfaceDiffuse=1
r.Lumen.ScreenProbeGather.ScreenTraces.HZBTraversal.FullResDepth=0
r.Lumen.TranslucencyVolume.GridPixelSize=64
r.Lumen.TranslucencyVolume.TraceFromVolume=0
r.Lumen.TranslucencyVolume.TracingOctahedronResolution=2
r.Lumen.TranslucencyVolume.RadianceCache.ProbeResolution=8
r.Lumen.TranslucencyVolume.RadianceCache.NumProbesToTraceBudget=200
1 Like

Really quick, can you edit the post and use "Hide details" on the pictures?

Just highlight the pics, press the gear button, hit "Hide details," and bam.

Clean af.

You gotta stop using Nanite. Even if the meshes are not over 15k tris, it adds overhead.

Make LODs and keep quad overdraw from showing heavy green or hotter highlighting by keeping the tri count lower in those areas. A sprinkle of green is fine, but nothing dense.
How I like to do LODs: use the wireframe view or quad overdraw, zoom out, and when the mesh becomes too dense in color or too green in quad overdraw, make another LOD; repeat until the last LOD is around 60 tris.
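If it helps anyone doing the same pass, those view modes can also be toggled straight from the console with the standard view-mode commands:

viewmode wireframe
; Inspect raw triangle density.
viewmode quadoverdraw
; Green-to-red heat map of quad overdraw.
viewmode lit
; Back to normal shading.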

Also use a distance cutoff for evaluating WPO.
This is how the Fortnite team did it, even before Nanite.
Nanite has even worse performance with WPO.

Read the entire tree section.
https://www.unrealengine.com/en-US/tech-blog/bringing-nanite-to-fortnite-battle-royale-in-chapter-4

That is probably another thing killing your VSM perf.

Also stop using TSR; it's not meant for native res (too expensive), and it ignores foliage motion vectors.

Use Epic's TAA with a frame weight of .1 from 3, with 2 to 4 samples.
You will get a VERY clean output with much cheaper ms timing.
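In console-variable form, my suggestion is roughly the following (values are a starting point, tune to taste):

r.AntiAliasingMethod 2
; 2 = TAA instead of TSR.
r.TemporalAACurrentFrameWeight 0.1
; Weight of the newest frame in the accumulated history.
r.TemporalAASamples 4
; Jitter sample count; 2 to 4 works.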

Follow everything I just said and you're going to get 60+ FPS at native 4K with the 3090.

1 Like

I've been updating the post. I already included the performance differences with TSR disabled alongside Lumen + Nanite + VSMs.

I saw the updates.
Stop using Nanite and TSR. They're killing your perf the most for no benefit.

Thanks for the Edit. Way cleaner now.

Thanks for the Edit. Way cleaner now.

No problem!


There is a visual benefit and a performance benefit when things are weighed. I'm not on the side of never using Nanite. I'm on the side of not using Nanite in its current state (which is for virtual production and high-end hardware only).

When I moved this scene to 5.2 so that I could enable Nanite, I was able to drastically increase the amount of on-screen foliage without taking a hit. Yes, there is a limit, but I did indeed see better performance with more on-screen foliage in this area.

My issue is that Nanite has no real options for optimization. It's like Epic took Nanite and said, "Let's set it up for consoles and high-end PCs and forget about everything else; screw traditional optimization techniques; it just works!" There isn't a way to incorporate a hybrid LOD/imposter + Nanite system out of the box without going the HLOD route or writing your own solution (I'm currently investigating custom solutions to Nanite optimization by amending the engine source).
Side note: I forgot to mention that I did not set up HLODs in my scene.

There is also no way to control WPO distances for the Nanite programmable rasterizer. So you end up having to take a massive performance hit even though you can't tell that a tree in the distance is blowing in the wind.

Nanite has huge potential. However, I agree that Epic needs to stop marketing Nanite for games and admit that it's currently intended for virtual production.

So if you use Nanite, you have to be prepared to have a stagnant framerate regardless of other settings. You have no ability to control how Nanite performs, where you want to cut back, hybrid systems, etc.

I'm not using cinematic-quality assets either. I don't believe in importing unoptimized game assets and relying on Nanite to solve the issue, which seems to be what Epic is promoting. All over the marketplace, if you find an asset that has Nanite enabled, you're going to see massive amounts of triangles on surfaces that only need 2.

1 Like

Nope. Nanite is the best tech that I have seen and used in the last decade :wink: But you have to think outside the box and learn a lot, not just swear :stuck_out_tongue: