Not just performance, but also bug fixes. Some of these bugs have been around forever, which reminds me of Unity. E.g. I just found that actor component tick behavior has changed from UE4; nobody notices these kinds of bugs, nobody from Epic's technical team reads the forums much anymore, and it goes unfixed forever.
Bugs are a sign of incoherent programming,
Incoherent programming equals inefficiency,
Inefficiency equals worse performance.
Not to offend any engine programmers, but we are human. As programmers, it's a lot to keep up with. But we shouldn't ignore the mistakes we might have made and just march on to the next version.
The UE5 programmers are getting paid to add new stuff instead of polishing the current technologies. Whoever is in charge of funding needs to understand: WE DON'T WANT NEW FEATURES.
There's no point in developing them unless everything is fixed and optimized.
It's like moving into a house that isn't finished yet.
I'm not blaming the engine programmers; I'm blaming whoever is ignoring the serious performance issues and isn't directing FN's 6 billion dollar income towards performance innovations in Unreal's new features.
The AI hype? Really? That thing has been pushed ever since the crypto stuff died down.
I say just this: do not trust the AI hype.
There is no Skynet. There is no AI.
LLMs are not AI at all. They're just applied statistics algorithms that have existed for decades already. The only difference is that in the past the hardware was too slow to fully utilize them, but that's all. And all the LLM algorithms and neural networks have huge limits and flaws.
There is no magic wand.
The real hard work for programming anything still comes from the human brain and so it will be for a very long time.
Which is another good reason for the programmers of professional software like Unreal Engine, or any other 3D engine, to write robust, optimized, fully working code.
It is really useless to pay programmers to code tons of features in Alpha/Beta stage that are either unusable or barely usable. It is a waste of time and money. Even if the marketing departments love it, it is a very bad strategy in the long run.
There is no magic wand.
This isn't? But you know what? Everyone acts like Nanite is a magic wand too, and it performs worse.
The real hard work for programming anything still comes from the human brain and so it will be for a very long time.
Yet studios are still being run by lazy developers pressed by deadlines, which kills modern game performance. Where is your human innovation?
This isn't magic.
Did you see RTX Remix?
Those AI textures (not models) were pretty darn magical to me.
But it wasn't magic. It was trained by human innovation.
Human innovation that can spread into everyone's hands via AI.
Studios and gamers need this.
Otherwise our games will perform worse and worse on great hardware.
Or do you have a better idea to stop more games from turning out like Immortals of Aveum and Remnant 2?
Because I find blurry, unperformant games unacceptable.
Honestly, the AI model is not complex in my opinion compared to a lot of other stuff.
Insta-LOD already has an algorithm.
But I'm talking about an AI touch to optimize the textures, via a workflow that's near impossible for humans: optimizing tiling and faking parallax occlusion based on the original mesh someone would otherwise have handed off to Nanite/VSMs.
EDIT: You didn't even go to the links? I can tell because no one has clicked on them yet. Really rude, dude; you didn't read the concept and decided to trash me ASAP for no reason.
No one needs the AI hype. Really.
Why you took that personally, I don't know.
If you want to trust the AI hype, you are free to. Still, it is just hype for marketing purposes.
Still, it is just hype for marketing purposes.
Nvidia makes more money off of AI than off of gamers.
Wouldn't exactly call it marketing.
Again, go LOOK at RTX Remix.
I'm not talking about DLSS or AI-enhanced graphics. I'm talking about a new workflow for all developers and all RT hardware, like consoles.
Why you took that personally, I don't know.
Because I don't appreciate comments that won't fix anything.
At least I'm trying to figure out how to massively change lazy studio workflows.
Nanite was marketing.
Where games are heading atm is a blurry temporal nightmare with low performance.
We need better ideas somewhere. If the engine devs won't work on more performance, and if more studios can't find a performant workflow, then we need to be the ones to innovate in new directions, because we're hitting a dead end.
Nanite is for virtual production. So the marketing was for the film industry lol.
If Nanite mixed in traditional optimization techniques, it's possible we could see better performance. I remember when mesh shaders first came on the scene. I saw the potential then and still see it now. But it seems that many developers are struggling to implement proper DX12U support that doesn't murder frame times.
Nanite is for virtual production. So the marketing was for the film industry lol.
"For this demo, we used the cinematic Quixel assets which would only be used in film."
A lot of these technologies are not entirely bad
And obviously it looks good in that demo. But the demo runs at a high resolution (not easily attainable for a lot of people) and at 30fps. Same situation with the Matrix demo.
The biggest thing bringing those projects down to 30fps is the Nanite meshes, hands down, plus the lack of hand-made optimization. WE KNOW we can get an amazing-looking 30fps game with UE5. And UE5 has gotten major performance improvements since.
But no common-sense innovations have been put on the roadmap for UE5.
There are several reasons why a studio wouldn't want their game running at 30fps:
- 30fps is less responsive and slideshow-ish when many other titles offer 60fps.
- Especially true if gameplay is dependent on input timing and combos, hand-eye coordination, and reaction time.
- The motion clarity at 30fps is absolutely HORRID. With content below 60fps, most screens will judder the last 2 FRAMES of your current motion, especially if it's fast-moving 30fps content. (So like an action game?.. wait, how many of those are there? Oh darn, a lot.)
I don't want to hear this elitist crap like "buy a better (vastly more expensive) TV or monitor."
"If you don't like 30fps then upgrade ($$$) your GPU."
You're telling me that with 6 billion dollars of revenue from FN sales… you can't pay for more programmers, computer graphics consultants, and veteran engine programmers?
2.5k views, and so far this is the 6th most-voted topic with only 37 votes.
The Goal? To be the top feedback post.
I have spent my day testing Lumen, VSM, and Nanite performance in the new 5.3 release.
The roadmap for UE5.3 had barely any mentions of targeting performance improvements.
r.Shadow.RadiusThreshold 0
doesn't seem to force VSMs to respect LODs like Nanite meshes do.
EDIT: This command is inconsistent.
Only VSMs and a newer (so possibly faster) version of C++ are now being used for UE5.
So far, in comparison with 5.2, we may have gotten a possible 10% performance increase. Maybe not even that. And yes, I scaled down the bumped-up settings in 5.3 to match 5.2.
Tests were done with the City Sample with zero AI or game logic running, to purely test GPU rasterization.
Lumen still relies heavily on massive amounts of past frames being smeared on top of your current frame, so motion/action still takes a blurry hit.
Nanite performance. Is it finally better?
Not sure if it's sabotage, but the mesh editor [and LOD system FREEZES UE5.3]
(Unreal Engine 5.3 Released - #85 by TheKJ)
So now I can't even test LODs versus Nanite performance anymore unless I jump through hoops in another LOD-creation application, then import, and all this other crap.
Honestly, this is too much. I'm super tired personally; not just physically but mentally tired.
TIRED of good ideas being slapped down by Epic and the UE5 programmers.
If such a massive/popular engine fails to focus on REAL performance vs upscaling with blurry temporal crap, then gamers with good hardware are screwed and people who like crisp games and action-motion are screwed.
If UE5.4 doesn't focus on real innovations to fix development problems and doesn't stop forcing people to use a 100% Nanite+Lumen+VSM workflow, then the future of games is going to be a complete oxymoron.
I've been fighting UE5's performance issues the best I could and have determined that I'm simply wasting my time. Visual quality can be upgraded if you downgrade the Lumen defaults. (I know how that sounds.)
By overriding BaseScalability.ini and adjusting the Lumen defaults for each scalability preset, I was able to get some performance gains. I tested native performance on 3 different GPUs. One of the GPUs was tested in combination with an older CPU.
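For context, the override mechanism itself is simple: a project-level DefaultScalability.ini can redeclare the same quality-level sections found in the engine's BaseScalability.ini, and any cvar listed there replaces the engine default for that level. A minimal sketch of the shape (the two cvars here are just examples pulled from my full config, which is posted further down in this thread):
[GlobalIlluminationQuality@3]
; Epic-level GI: trim Lumen's screen-probe tracing budget
r.Lumen.ScreenProbeGather.RadianceCache.NumProbesToTraceBudget=300
r.Lumen.ScreenProbeGather.DownsampleFactor=32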
Main Development PC:
GPU: RTX 3090ti 24GB
CPU: Ryzen 9 5950X
RAM: 64GB
Monitor: 1440p
PC 01:
GPU: RTX 2080ti 11GB
CPU: Ryzen 9 5950X
RAM: 64GB
Monitor: 1080p
PC 02:
GPU: GTX 1080ti 11GB
CPU: Intel Core i7 7800X
RAM: 16GB
Monitor: 1080p
GPUs were not swapped; these are separate systems.
I tested a packaged build on all of these machines at their monitors' native resolutions. All builds ran the same scalability level, which was Epic.
Engine Version used for testing: 5.2
I'm in the process of creating the test for 5.3.
Main Development PC: at native 1440p (no upscaling), I was able to get 54 FPS. The average sat at around 55 FPS.
PC 01: I was only able to get 40-45 FPS.
PC 02: I was only able to get 20-24 FPS.
This is after adjusting r.Nanite.MaxPixelsPerEdge to 2 instead of the default 1. The lumen settings have also been severely downgraded in the overridden DefaultScalability.ini. If I remove these optimized settings, I get a drop in performance.
A lumen setting that may or may not contribute to performance gains: r.Lumen.SampleFog=0
After ensuring that Lumen isn't killing the performance, I went on to optimize VSMs using the following command(s):
r.Nanite.ProgrammableRaster.Shadows=0
; This command has some visual drawbacks that may or may not be noticeable.
r.Shadow.Virtual.ResolutionLodBiasDirectional=1.5
This is the scene I used for benchmarking. It's heavy in Nanite foliage:
On the main development PC, can you show "stat gpu" for your scene?
I want to see the biggest hit to your perf.
Also post the debug views for each feature (Nanite, Lumen, VSMs).
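For reference, the profiling I mean is just the stock console commands, nothing project-specific:
stat unit      ; splits frame time into Game / Draw / GPU
stat gpu       ; per-pass GPU timings (shadows, Lumen passes, post-processing, etc.)
ProfileGPU     ; dumps a single-frame GPU breakdown for closer inspection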
Also we have met before on Reddit.
I applied some of your console commands and found major drops in performance.
I forgot which ones tho.
Probably because I posted a version of the scalability settings with Lumen's radiance cache disabled. I did this to test visual quality but am not sure how it impacted performance. I saw a performance decrease using Epic's default settings for Lumen.
I enabled AsyncCompute for Lumen reflections, but Epic turned it off, as they say they see better performance with it off. So if you use the 5.3 defaults for AsyncCompute, you'll be fine.
These are Epic's defaults… I had the reflections set to 1 in my Reddit post.
r.Lumen.DiffuseIndirect.AsyncCompute=1
r.Lumen.Reflections.AsyncCompute=0
And then in my DefaultEngine.ini, I disabled the radiance cache using:
r.Lumen.ScreenProbeGather.RadianceCache=False
But I would have to test the performance differences before I can say that this would have caused an issue.
I'll post some more details on the current state of performance in my game soon. I'm in the process of compiling the 5.3 source. I know this is going to require that I rework some of the settings, thanks to the VSM updates.
Also post the debug views for each feature (Nanite, Lumen, VSMs).
System:
GPU: RTX 3090ti 24GB
CPU: Ryzen 9 5950X
RAM: 64GB
Monitor: 2560x1440
All screenshots have Nanite enabled and are captured at native resolution.
I am now using UE 5.3, so the VSM cache page is not the same as in 5.2. The VSMs in 5.2 were rendering blue in the debug view mode. I have since updated the scene so that I can make everything green. However, I still get the same performance.
This is with VSMs & Lumen disabled (which I know are among the biggest performance killers):
This is with Lumen, VSMs, and TSR disabled. Only traditional shadow maps are being used in this screenshot:
My biggest killer is on the GPU. TSR eats 2 milliseconds of frame time when the screen percentage is 100%… so I disabled it and got over 60 FPS at native 1440p.
I have heavily modified Lumen's default settings in my DefaultScalability.ini and am running Epic settings across the board.
In my DefaultEngine.ini, I set these values:
r.Nanite.ProgrammableRaster.Shadows=0
r.Lumen.TraceMeshSDFs=0
r.Lumen.SampleFog=0
r.Lumen.TranslucencyReflections.FrontLayer.EnableForProject=False
r.Shadow.Virtual.ResolutionLodBiasDirectional=1.5
In my DefaultScalability.ini, I set these values:
[GlobalIlluminationQuality@2]
r.DistanceFieldAO=1
r.AOQuality=1
r.Lumen.DiffuseIndirect.Allow=1
r.LumenScene.Radiosity.ProbeSpacing=16
r.LumenScene.Radiosity.HemisphereProbeResolution=2
r.Lumen.TraceMeshSDFs.Allow=0
r.Lumen.ScreenProbeGather.RadianceCache.ProbeResolution=8
r.Lumen.ScreenProbeGather.RadianceCache.NumProbesToTraceBudget=200
r.Lumen.ScreenProbeGather.DownsampleFactor=64
r.Lumen.ScreenProbeGather.TracingOctahedronResolution=8
r.Lumen.ScreenProbeGather.IrradianceFormat=1
r.Lumen.ScreenProbeGather.StochasticInterpolation=1
r.Lumen.ScreenProbeGather.FullResolutionJitterWidth=0
r.Lumen.ScreenProbeGather.TwoSidedFoliageBackfaceDiffuse=0
r.Lumen.ScreenProbeGather.ScreenTraces.HZBTraversal.FullResDepth=0
r.Lumen.TranslucencyVolume.GridPixelSize=64
r.Lumen.TranslucencyVolume.TraceFromVolume=0
r.Lumen.TranslucencyVolume.TracingOctahedronResolution=2
r.Lumen.TranslucencyVolume.RadianceCache.ProbeResolution=8
r.Lumen.TranslucencyVolume.RadianceCache.NumProbesToTraceBudget=100
[GlobalIlluminationQuality@3]
r.DistanceFieldAO=1
r.AOQuality=2
r.Lumen.DiffuseIndirect.Allow=1
r.LumenScene.Radiosity.ProbeSpacing=8
r.LumenScene.Radiosity.HemisphereProbeResolution=3
r.Lumen.TraceMeshSDFs.Allow=1
r.Lumen.ScreenProbeGather.RadianceCache.ProbeResolution=16
r.Lumen.ScreenProbeGather.RadianceCache.NumProbesToTraceBudget=300
r.Lumen.ScreenProbeGather.DownsampleFactor=32
r.Lumen.ScreenProbeGather.TracingOctahedronResolution=8
r.Lumen.ScreenProbeGather.IrradianceFormat=1
r.Lumen.ScreenProbeGather.StochasticInterpolation=0
r.Lumen.ScreenProbeGather.FullResolutionJitterWidth=0
r.Lumen.ScreenProbeGather.TwoSidedFoliageBackfaceDiffuse=1
r.Lumen.ScreenProbeGather.ScreenTraces.HZBTraversal.FullResDepth=0
r.Lumen.TranslucencyVolume.GridPixelSize=64
r.Lumen.TranslucencyVolume.TraceFromVolume=0
r.Lumen.TranslucencyVolume.TracingOctahedronResolution=2
r.Lumen.TranslucencyVolume.RadianceCache.ProbeResolution=8
r.Lumen.TranslucencyVolume.RadianceCache.NumProbesToTraceBudget=200
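One placement note, in case anyone copies these: in DefaultEngine.ini, raw cvars like the ones above normally sit under a [SystemSettings] section, e.g.:
[SystemSettings]
r.Nanite.ProgrammableRaster.Shadows=0
r.Shadow.Virtual.ResolutionLodBiasDirectional=1.5
(and so on for the rest of that list), while the scalability values go into the quality-level sections of DefaultScalability.ini exactly as written.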
Really quick, can you edit the post and use "Hide Details" on the pictures?
Just highlight the pics, press the gear button, choose hide details, and bam.
Clean af.
You gotta stop using Nanite. Even if the meshes are not over 15k tris, it adds overhead.
Make LODs and keep the quad overdraw view from showing heavy green-or-above highlighting by keeping tri counts lower in those areas. A sprinkle of green is fine, but nothing dense.
How I like to do LODs: use the wireframe view or quad overdraw, zoom out, and when the mesh becomes too dense in color or too green in quad overdraw, make another LOD; repeat until the last LOD is around 60 tris.
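If you'd rather flip those views from the console than the editor toolbar, the standard viewmode command covers it (the quad overdraw mode needs the editor/debug view-mode shaders available):
viewmode wireframe       ; raw triangle density
viewmode quadoverdraw    ; per-quad overdraw heat map; keep it out of heavy green
viewmode lit             ; back to normal shading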
Also use the distance-based WPO evaluation setting (the World Position Offset disable distance).
This is how the Fortnite team did it. Even before Nanite.
Nanite has even worse performance with WPO.
Read the entire tree section.
https://www.unrealengine.com/en-US/tech-blog/bringing-nanite-to-fortnite-battle-royale-in-chapter-4
That is probably another thing killing your VSM perf.
Also stop using TSR; it's not meant for native res (too expensive), and it ignores foliage motion vectors.
Use Epic's TAA with a frame weight of .1 and 2 to 4 samples.
You will get a VERY clean output with much cheaper ms timing.
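Roughly, that maps to cvars along these lines (treat the exact numbers as a starting point rather than gospel; the frame weight and sample count are the two knobs I mean):
r.AntiAliasingMethod=2              ; 2 = TAA (4 = TSR)
r.TemporalAACurrentFrameWeight=0.1  ; heavier current-frame weight = less smearing/ghosting
r.TemporalAASamples=4               ; shorter jitter pattern; 2 to 4 works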
Follow everything I just said and you're going to get 60+ fps at native 4K with the 3090.
I've been updating the post. I already included the performance differences with TSR disabled while running Lumen + Nanite + VSMs.
I saw the updates.
Stop using Nanite and TSR. They're killing your perf the most, for no benefit.
Thanks for the Edit. Way cleaner now.
Thanks for the Edit. Way cleaner now.
No problem!
There is a visual benefit and a performance benefit when things are weighed. I'm not on the side of never using Nanite; I'm on the side of not using Nanite in its current state (which is for virtual production and high-end hardware only).
When I moved this scene to 5.2 so that I could enable Nanite, I was able to drastically increase the amount of onscreen foliage without taking a hit. Yes, there is a limit, but I did indeed see better performance with more onscreen foliage in this area.
My issue is that Nanite has no real options for optimization. It's like Epic took Nanite and said, "Let's set it up for consoles and high-end PCs and forget about everything else; screw traditional optimization techniques; it just works!" There isn't a way to incorporate a hybrid LOD/impostor + Nanite system out of the box without going the HLOD route or writing your own solution (I'm currently investigating custom solutions to Nanite optimization by amending the engine source).
Side note: I forgot to mention that I did not set up HLODs in my scene.
There is also no way to control WPO distances for the Nanite programmable rasterizer, so you end up taking a massive performance hit even though you can't tell that a tree in the distance is blowing in the wind.
Nanite has huge potential. However, I agree that Epic needs to stop marketing Nanite for games and admit that it's currently intended for virtual production.
So if you use Nanite, you have to be prepared for a stagnant framerate regardless of other settings. You have no ability to control how Nanite performs, where you want to cut back, hybrid systems, etc.
I'm not using cinematic-quality assets either. I don't believe in importing unoptimized game assets and relying on Nanite to solve the issue, which seems to be what Epic is promoting. All over the Marketplace, if you find an asset that has Nanite enabled, you're going to see massive amounts of triangles on surfaces that only need 2.
Nope. Nanite is the best tech I have seen and used in the last decade. But you have to think outside the box and learn a lot, not just swear.