Not just performance, but also bug fixes. Some of these bugs have been around forever; it reminds me of Unity. For example, I just found that actor component tick behavior changed from UE4. Nobody notices these kinds of bugs, nobody from Epic's technical team reads the forums much anymore, and they go unfixed forever.
Bugs are a sign of incoherent programming,
Incoherent programming equals inefficiency,
Inefficiency equals worse performance.
Not to offend any engine programmers, but we are human. As programmers, it's a lot to keep up with. But we shouldn't ignore the mistakes we might have made and just march on to the next version.
The UE5 programmers are getting paid to add new stuff instead of polishing the current technologies. Whoever is in charge of funding
needs to understand: WE DON'T WANT NEW FEATURES.
There's no point in developing them unless everything is fixed and optimized.
It's like moving into a house that isn't finished yet.
I'm not blaming the engine programmers; I'm blaming whoever is ignoring the serious performance issues and isn't directing FN's 6-billion-dollar income towards performance innovations in Unreal's new features.
BUMP.
This isn't just about my game or our projects.
This is about all games: LOOK BELOW.
If Fortnite is performing like this… what will happen to game performance when
the majority of studios decide to use UE5 for games NOTHING LIKE FORTNITE?
Our games, led by cheap, lazy developers and studios,
WITHOUT A DOUBT will perform like Remnant 2 regardless of how low you set your settings.
We will be stuck with BLURRY, smearing games until we fix this.
This is the ugly and sad future of games… if this isn't addressed in a completely new way:
I would like to share an idea that could reverse the direction we're going in:
(Excuse the rough draft in the first post; I didn't expect the forum to lock me out of editing. I just got a little too passionate about the idea at the end there…)
This may seem like a fantasy panacea for our current situation in the gaming industry.
But we already have a step forward in this direction.
Take a look at RTX Remix, a workflow enhanced by AI.
Think of an AI model to optimize meshes 10x more than Nanite ever could.
The optimized mesh would perform miles ahead of the Nanite mesh (whether 100k or 2k tris),
not just because it's lower poly, but because high-poly meshes destroy/slow shadow and lighting calculations.
I'm not talking about AI that makes a magical mesh from thin air.
I'm talking about an intelligent algorithm that takes a mesh you would normally hand off to Nanite.
The AI model scales its tri count down to the bare minimum while preserving detail via texture tricks, UV tiling optimization, and AI-enhanced textures no human could put together without HOURS of work that no one would realistically pay for.
This could solve or mitigate some of the major challenges of modern game development:
- Performance problems
- VRAM problems
- Games forcing blurry temporal upscalers to reach 60fps.
- Finding someone to do the work by hand.
- and SAVING TIME
A few years ago this would have been crazy to suggest. But the fact is, Nvidia has shown us the power of AI models time and time again.
This is now possible.
THIS IS THE FUTURE… of game development.
AND SOMEONE needs to invest in it already.
The AI hype? Really? That stuff only started getting pushed after the crypto craze wound down.
I say just this: do not trust the AI hype.
There is no Skynet. There is no AI.
LLMs are not AI at all. They're just applied statistical algorithms that have existed for decades. The only difference is that in the past the hardware was too slow to fully utilize them, but that's all. And all the LLM algorithms and neural networks have huge limits and flaws.
There is no magic wand.
The real hard work for programming anything still comes from the human brain and so it will be for a very long time.
Which is another good reason for the programmers of professional software like Unreal Engine, or any other 3D engine, to write robust, optimized, fully working code.
It is really useless to pay programmers to code tons of features in an Alpha/Beta stage that are either unusable or barely usable. It is a waste of time and money. Even if the marketing departments love it, it is a very bad strategy in the long run.
There is no magic wand.
This isn't one either. But you know what? Everyone acts like Nanite is a magic wand too, and it performs worse.
The real hard work for programming anything still comes from the human brain and so it will be for a very long time.
Yet studios are still being run by lazy developers pressed by deadlines that kill modern game performance. Where is your human innovation?
This isn't magic.
Did you see RTX Remix?
Those AI textures (not models) were pretty darn magical to me.
But it wasnāt magic. It was trained by human innovation.
Human innovation that can spread into everyoneās hands via AI.
Studios and gamers need this.
Otherwise our games are going to perform worse and worse on great hardware.
Or do you have a better idea to stop more games from turning out like Immortals of Aveum and
Remnant 2?
Because I find blurry, unperformant games unacceptable.
Honestly, the AI model is not complex in my opinion compared to a lot of other stuff.
Insta-LOD already has an algorithm.
But I'm talking about an AI touch that optimizes the textures, via a workflow nearly impossible to do by hand, for tiling and fake parallax occlusion based on the original mesh someone would have handed off to Nanite/VSMs.
EDIT: You didn't even go to the links? I can tell because no one has clicked on them yet. Really rude, dude; you didn't read the concept and decided to trash me ASAP for no reason.
No one needs the AI hype. Really.
Why you took that personally, I don't know.
If you want to trust the AI hype, you are free to. Still, it is just hype for marketing purposes.
Still, it is just hype for marketing purposes.
Nvidia makes more money off of AI than gamers.
Wouldn't exactly call it marketing.
Again, go LOOK at RTX Remix.
I'm not talking about DLSS or AI-enhanced graphics. I'm talking about a new workflow for all developers and all RT hardware, consoles included.
Why you took that personally, I don't know.
Because I don't appreciate comments that won't fix anything.
At least I'm trying to figure out how to massively change lazy studio workflows.
Nanite was marketing.
Where games are heading at the moment is a blurry temporal nightmare with low performance.
We need better ideas from somewhere. If the engine devs won't work on more performance, and if more studios can't find a performant workflow, then we need to be the ones to innovate in new directions, because we're hitting a dead end.
Nanite is for virtual production. So the marketing was for the film industry lol.
If Nanite mixed in traditional optimization techniques, it's possible we could see better performance. I remember when mesh shaders first came on the scene. I saw the potential then and still see it now. But it seems that many developers are struggling to implement proper DX12U support that doesn't murder frame times.
Nanite is for virtual production. So the marketing was for the film industry lol.
"For this demo, we used the cinematic Quixel assets, which would only be used in film."
A lot of these technologies are not entirely bad.
And obviously it looks good in that demo. But the demo runs at a high resolution (not easily attainable for a lot of people) and at 30fps. Same situation with the Matrix demo.
The biggest thing dragging those projects down to 30fps is the Nanite meshes, hands down: the lack of hand-made optimization. WE KNOW we can get an amazing-looking 30fps game with UE5. And UE5 has gotten major performance improvements since.
But no common sense innovations have been set on the roadmap for UE5.
There are several reasons why a studio wouldn't want their game running at 30fps:
- 30fps is less responsive and slideshow-ish when many other titles offer 60fps.
- Especially true if gameplay is dependent on input timing and combos, hand-eye coordination, and reaction time.
- The motion clarity at 30fps is absolutely HORRID. With content below 60fps, most screens will judder the last 2 FRAMES of your current motion, especially if it's fast-moving 30fps content. (So, like an action game?.. wait, how many of those are there? Oh darn, a lot.)
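To put rough numbers on that last point (just arithmetic, not a measurement): at 30fps each frame persists for 1/30 s ≈ 33 ms, so a 60Hz panel repeats every frame for two refresh cycles. An object crossing a 2560-pixel-wide screen in one second steps about 85 px between frames at 30fps versus about 43 px at 60fps, and that doubled step size is exactly what reads as judder and smear in fast motion.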
I don't want to hear elitist crap like "buy a better (vastly more expensive) TV or monitor."
"If you don't like 30fps, then upgrade ($$$) your GPU."
That is NOT innovation!?
You're telling me that with 6 billion dollars of revenue from FN sales… you can't pay for more programmers, computer graphics consultants, and veteran engine programmers?
It's like no UE5 engine programmer even reads SIGGRAPH papers and presentations!?
2.5k views, and so far this is the 6th most-voted topic with only 37 votes.
The Goal? To be the top feedback post.
I have spent my day testing Lumen, VSM, and Nanite performance in the new 5.3 release.
The roadmap for UE5.3 had barely any mentions of targeting performance improvements.
r.Shadow.RadiusThreshold 0
doesn't seem to force VSMs to respect LODs like Nanite meshes.
EDIT: (This command is inconsistent.)
Only VSMs and a newer (so possibly faster) version of C++ are now being used for UE5.
So far, in comparison with 5.2, we may have gotten a roughly 10% performance increase. Maybe not even that. And yes, I scaled down the bumped-up settings in 5.3 to match 5.2.
Tests were done with the City Sample with zero AI or game logic running, to purely test GPU rasterization.
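For anyone repeating this kind of test, the built-in console commands for confirming that the bottleneck really is GPU rasterization (nothing custom here) are stat unit for per-thread timings at a glance, stat gpu for per-pass GPU cost, and ProfileGPU for a one-shot detailed breakdown of the current frame:
stat unit
stat gpu
ProfileGPU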
Lumen still relies heavily on massive amounts of past frames being smeared on top of your current frame, so motion/action still takes a blurry hit.
Nanite performance. Is it finally better?
Not sure if it's sabotage, but the mesh editor [and LOD systems FREEZE UE5.3]
(Unreal Engine 5.3 Released - #85 by _Solid_Snake)
So now I can't even test LODs versus Nanite performance anymore unless I jump through hoops in another LOD-creation application, then import, and all this other crap.
Honestly, this is too much. I'm super tired personally, not just physically but mentally tired.
TIRED of good ideas being slapped down by Epic and the UE5 programmers.
If such a massive/popular engine fails to focus on REAL performance vs upscaling with blurry temporal crap, then gamers with good hardware are screwed and people who like crisp games and action-motion are screwed.
If UE5.4 doesn't focus on real innovations to fix development problems and doesn't stop forcing people into a 100% Nanite+Lumen+VSM workflow, then the future of games is going to be a complete oxymoron.
I've been fighting UE5's performance issues the best I could and have determined that I'm simply wasting my time. Visual quality can be upgraded if you downgrade the Lumen defaults. (I know how that sounds.)
By overriding BaseScalability.ini and adjusting the Lumen defaults for each scalability preset, I was able to get some performance gains. I tested native performance on 3 different GPUs. One of the GPUs was tested in combination with an older CPU.
Main Development PC:
GPU: RTX 3090ti 24GB
CPU: Ryzen 9 5950X
RAM: 64GB
Monitor: 1440p
PC 01:
GPU: RTX 2080ti 11GB
CPU: Ryzen 9 5950X
RAM: 64GB
Monitor: 1080p
PC 02:
GPU: GTX 1080ti 11GB
CPU: Intel Core i7 7800X
RAM: 16GB
Monitor 1080p
GPUs were not swapped; these are separate systems.
I tested a packaged build on all of these machines at their monitors' native resolutions. All builds ran at the same scalability level, which was Epic.
Engine Version used for testing: 5.2
I'm in the process of creating the test for 5.3.
Main development PC: at native 1440p (no upscaling), I was able to get 54 FPS. The average sat at around 55 FPS.
PC 01: I was only able to get 40-45 FPS.
PC 02: I was only able to get 20-24 FPS.
This is after adjusting r.Nanite.MaxPixelsPerEdge to 2 instead of the default 1. The Lumen settings have also been severely downgraded in the overridden DefaultScalability.ini. If I remove these optimized settings, I get a drop in performance.
A Lumen setting that may or may not contribute to performance gains: r.Lumen.SampleFog=0
After ensuring that Lumen isn't killing the performance, I went on to optimize VSMs using the following command(s):
r.Nanite.ProgrammableRaster.Shadows=0
; This command has some visual drawbacks that may or may not be noticeable.
r.Shadow.Virtual.ResolutionLodBiasDirectional=1.5
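For reference, these overrides can also be applied at startup from the project's Config/DefaultEngine.ini rather than typed at the console; a minimal sketch, assuming the standard [SystemSettings] cvar section (adjust if your project organizes its configs differently):
[SystemSettings]
r.Nanite.MaxPixelsPerEdge=2
r.Lumen.SampleFog=0
r.Nanite.ProgrammableRaster.Shadows=0
; disables the programmable-raster path (WPO/masked materials) for Nanite shadow rendering
r.Shadow.Virtual.ResolutionLodBiasDirectional=1.5
; biases directional-light VSM pages toward lower resolution: cheaper, slightly softer shadows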
This is the scene I used for benchmarking. It's heavy in Nanite foliage:
On the main development PC, can you show "stat gpu" for your scene?
I want to see the biggest hit to your perf.
Also post the debug views for each feature (Nanite, Lumen, VSMs).
Also we have met before on Reddit.
I applied some of your console commands and found major drops in performance.
I forget which ones, though.
Probably because I posted a version of the scalability settings with Lumen's radiance cache disabled. I did this for testing visual quality but am not sure how that impacted performance. I saw a performance decrease using Epic's default settings for Lumen.
I enabled AsyncCompute for Lumen reflections, but Epic turned it off, as they say they see better performance with it off. So if you use the 5.3 defaults for AsyncCompute, you'll be fine.
These are Epic's defaults… I had the reflections set to 1 in my Reddit post.
r.Lumen.DiffuseIndirect.AsyncCompute=1
r.Lumen.Reflections.AsyncCompute=0
And then in my DefaultEngine.ini, I disabled the radiance cache using:
r.Lumen.ScreenProbeGather.RadianceCache=False
But I would have to test the performance differences before I can say that this would have caused an issue.
I'll post some more details on the current state of performance in my game soon. I'm in the process of compiling the 5.3 source. I know this is going to require me to rework some of the settings thanks to the VSM updates.
Also post the debug views for each feature (Nanite, Lumen, VSMs).
System:
GPU: RTX 3090ti 24GB
CPU: Ryzen 9 5950X
RAM: 64GB
Monitor: 2560x1440
All screenshots have Nanite enabled and are captured at native resolution.
I am now using UE 5.3, so the VSM cache-page view is not the same as in 5.2. The VSMs in 5.2 were rendering blue in the debug view mode. I have since updated the scene so that everything is green. However, I still get the same performance.
This is with VSMs & Lumen disabled (which I know are among the biggest performance killers):
This is with Lumen, VSMs, and TSR disabled. Only traditional shadow maps are being used in this screenshot:
My biggest killer is on the GPU. TSR eats 2 milliseconds when the screen percentage is 100%… so I disabled it and got over 60 FPS at native 1440p.
I have heavily modified Lumen's default settings in my DefaultScalability.ini and am running Epic settings across the board.
In my DefaultEngine.ini, I set these values:
r.Nanite.ProgrammableRaster.Shadows=0
r.Lumen.TraceMeshSDFs=0
r.Lumen.SampleFog=0
r.Lumen.TranslucencyReflections.FrontLayer.EnableForProject=False
r.Shadow.Virtual.ResolutionLodBiasDirectional=1.5
In my DefaultScalability.ini, I set these values:
[GlobalIlluminationQuality@2]
r.DistanceFieldAO=1
r.AOQuality=1
r.Lumen.DiffuseIndirect.Allow=1
r.LumenScene.Radiosity.ProbeSpacing=16
r.LumenScene.Radiosity.HemisphereProbeResolution=2
r.Lumen.TraceMeshSDFs.Allow=0
r.Lumen.ScreenProbeGather.RadianceCache.ProbeResolution=8
r.Lumen.ScreenProbeGather.RadianceCache.NumProbesToTraceBudget=200
r.Lumen.ScreenProbeGather.DownsampleFactor=64
r.Lumen.ScreenProbeGather.TracingOctahedronResolution=8
r.Lumen.ScreenProbeGather.IrradianceFormat=1
r.Lumen.ScreenProbeGather.StochasticInterpolation=1
r.Lumen.ScreenProbeGather.FullResolutionJitterWidth=0
r.Lumen.ScreenProbeGather.TwoSidedFoliageBackfaceDiffuse=0
r.Lumen.ScreenProbeGather.ScreenTraces.HZBTraversal.FullResDepth=0
r.Lumen.TranslucencyVolume.GridPixelSize=64
r.Lumen.TranslucencyVolume.TraceFromVolume=0
r.Lumen.TranslucencyVolume.TracingOctahedronResolution=2
r.Lumen.TranslucencyVolume.RadianceCache.ProbeResolution=8
r.Lumen.TranslucencyVolume.RadianceCache.NumProbesToTraceBudget=100
[GlobalIlluminationQuality@3]
r.DistanceFieldAO=1
r.AOQuality=2
r.Lumen.DiffuseIndirect.Allow=1
r.LumenScene.Radiosity.ProbeSpacing=8
r.LumenScene.Radiosity.HemisphereProbeResolution=3
r.Lumen.TraceMeshSDFs.Allow=1
r.Lumen.ScreenProbeGather.RadianceCache.ProbeResolution=16
r.Lumen.ScreenProbeGather.RadianceCache.NumProbesToTraceBudget=300
r.Lumen.ScreenProbeGather.DownsampleFactor=32
r.Lumen.ScreenProbeGather.TracingOctahedronResolution=8
r.Lumen.ScreenProbeGather.IrradianceFormat=1
r.Lumen.ScreenProbeGather.StochasticInterpolation=0
r.Lumen.ScreenProbeGather.FullResolutionJitterWidth=0
r.Lumen.ScreenProbeGather.TwoSidedFoliageBackfaceDiffuse=1
r.Lumen.ScreenProbeGather.ScreenTraces.HZBTraversal.FullResDepth=0
r.Lumen.TranslucencyVolume.GridPixelSize=64
r.Lumen.TranslucencyVolume.TraceFromVolume=0
r.Lumen.TranslucencyVolume.TracingOctahedronResolution=2
r.Lumen.TranslucencyVolume.RadianceCache.ProbeResolution=8
r.Lumen.TranslucencyVolume.RadianceCache.NumProbesToTraceBudget=200
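For clarity on how those blocks get picked up: the @2 and @3 suffixes map to the High and Epic quality levels, so running everything on Epic means the @3 block is the one in effect. The level is selected through the standard scalability cvar:
sg.GlobalIlluminationQuality 3
; 0 = Low, 1 = Medium, 2 = High, 3 = Epic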
Really quick, can you edit the post and use "Hide details" on the pictures?
Just highlight the pics, press the gear button, hide details, and bam.
Clean af.
You gotta stop using Nanite. Even if the meshes are not over 15k tris, it adds overhead.
Make LODs and keep quad overdraw from highlighting heavy green or above by keeping the tri count lower in those areas. A sprinkle of green is fine, but nothing dense.
How I like to do LODs: use the wireframe view or quad overdraw, zoom out, and when the mesh becomes too dense in color or too green in quad overdraw, make another LOD; repeat until the last LOD is around 60 tris.
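(For reference, those view modes can be toggled straight from the console as well as from the viewport menu; if I'm remembering the console names right:
viewmode wireframe
viewmode quadoverdraw
viewmode lit
The last one switches you back to the normal lit view.)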
Also use the distance-based Evaluate WPO option, so world position offset stops being evaluated past a set distance.
This is how the Fortnite team did it. Even before Nanite.
Nanite has even worse performance with WPO.
Read the entire tree section.
https://www.unrealengine.com/en-US/tech-blog/bringing-nanite-to-fortnite-battle-royale-in-chapter-4
That is probably another thing killing your VSM perf.
Also, stop using TSR; it's not meant for native res (too expensive), and it ignores foliage motion vectors.
Use Epic's TAA with the current frame weight at .1 and 2 to 4 samples.
You will get a VERY clean output with a much cheaper ms cost.
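Roughly, that maps to cvars along these lines (values are my ballpark, tune per project):
r.AntiAliasingMethod 2
; 2 = TAA, 4 = TSR
r.TemporalAACurrentFrameWeight 0.1
r.TemporalAASamples 4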
Follow everything I just said and you're going to get 60+ FPS at native 4K with the 3090.
I've been updating the post. I already included the performance differences with TSR disabled alongside Lumen + Nanite + VSMs.
I saw the updates.
Stop using Nanite and TSR. They're killing your perf the most, for no benefit.
Thanks for the Edit. Way cleaner now.
Thanks for the Edit. Way cleaner now.
No problem!
There is a visual benefit and a performance benefit when things are weighed. I'm not on the side of never using Nanite. I'm on the side of not using Nanite in its current state (which is for virtual production and high-end hardware only).
When I moved this scene to 5.2 so that I could enable Nanite, I was able to drastically increase the amount of on-screen foliage without taking a hit. Yes, there is a limit, but I did indeed see better performance with more on-screen foliage in this area.
My issue is that Nanite has no real options for optimization. It's like Epic took Nanite and said, "Let's set it up for consoles and high-end PCs and forget about everything else; screw traditional optimization techniques; it just works!" There isn't a way to incorporate a hybrid LOD/impostor + Nanite system out of the box without going the HLOD route or writing your own solution (I'm currently investigating custom solutions to Nanite optimization by amending the engine source).
Side note: I forgot to mention that I did not set up HLODs in my scene.
There is also no way to control WPO distances for the Nanite programmable rasterizer. So you end up taking a massive performance hit even though you can't tell that a tree in the distance is blowing in the wind.
Nanite has huge potential. However, I agree that Epic needs to stop marketing Nanite for games and admit that it's currently intended for virtual production.
So if you use Nanite, you have to be prepared for a stagnant framerate regardless of other settings. You have no ability to control how Nanite performs, where you want to cut back, hybrid systems, etc.
I'm not using cinematic-quality assets either. I don't believe in importing unoptimized game assets and relying on Nanite to solve the issue, which seems to be what Epic is promoting. All over the Marketplace, if you find an asset that has Nanite enabled, you're going to see massive amounts of triangles on surfaces that only need 2.
Nope. Nanite is the best tech that I have seen and used in the last decade. But you have to think outside the box and learn a lot, not just swear at it.