Interesting analysis from Digital Foundry.
This is 5.0 vs 5.4:
Unreal Engine 5.4: Big Performance Improvements, New Features, But What About #StutterStruggle? (youtube.com)
The Finals is on UE. You can check the developers' footage.
I don't know why you linked that. I didn't say The Finals wasn't on UE, I said it wasn't on Epic's stock UE / not using the broken technologies UE5 promotes.
No Nanite, no VSMs, no Lumen: the very features every other UE5 game is being built on exclusively, for basic visual standards already set during the 8th gen.
Epic's technologies (the main UE5 ones) are not fit for mass consumer hardware. They continue to WASTE money on these fundamentally anti-performance and anti-fidelity features instead of the things we've stated in this thread would help far more.
Epic feels no obligation to make the engine better for the majority of environmental game designs (static environments with dynamic lights); FN is their 6-billion-dollar baby, and virtual production and unacceptable 30fps titles (on 9th gen) are their side quest.
What this entire thread shows is that developers are ready to progress in an area where Epic won't.
Interesting analysis from Digital Foundry.
They constantly give poor analysis of critical topics and spread bad information about rendering, temporal techniques, and assumptions about how things are done. There are a couple of things in that video that earned my dislike.
That's a bold thing to say about Digital Foundry.
They sometimes make mistakes and bad assumptions (who doesn't?). But most of the time they catch very interesting rendering details that others miss, and their performance analysis is good (which is the reason I posted this in the first place).
They give very good advice on the best compromise between visual and performance settings for each game they review.
You've just lost a lot of credibility with me.
Moreover, you don't seem to understand that this is the normal development process: focus first on adding the tools that make the engine easier to use and on introducing new rendering techniques, without optimizing them, because you have to let them mature before optimizing.
UE5 has probably cut by a factor of 2 to 10 the time it takes a solo dev or small team to make a game, with good animations, good and fast asset placement (with PCG), and easy lighting setup with Lumen. Compared to UE4, a lot of things can be done directly inside the editor without relying on external software. I don't see any of that mentioned in your post.
If you have to take twice as long to optimize your game but cut the time needed for everything else by 10, that seems like a very good trade-off to me.
But if you only care about performance and don't want cutting-edge graphics, then you should just stick with UE4, IMO.
I'm kind of "happy" with Lumen and AA in open-world situations right now (5.4), but not with Nanite yet. We finally have full Vulkan support on Windows and Linux, which is great, but the editor on Linux tends to devour performance (tested only on Nvidia), and after 4-5 hours you have to close and reopen it.
Maybe they just shouldn't put unpolished stuff in a "release" version.
I don't want to sound grumpy, lol, but we have "experimental" parts that might not stick around, plus beta features, inside a "release". Don't you think that's weird? If you did that with any other software, your customers would strangle you!
They give very good advice on the best compromise between visual and performance settings for each game they review.
You've just lost a lot of credibility with me.
Yeah, and they've lost credibility with thousands of people who are far more aware of the abuse of TAA. I'm producing a detailed video showing how ridiculous their analyses look from a REAL professional perspective. They're reporters, big deal, not investigative reporters. They spread false narratives about game development and performance that allow studios to abuse consumers.
You haven't been at the mercy of their derogatory insults and comments about the serious issues with modern game dev that thousands of people have been trying to raise awareness of. THEY have created a disgusting MESS that I now have to pick through and clean up.
News flash: I have zero respect for the entire GP industry, including the main people running it. So if someone doesn't think I'm credible, I don't care. I'm in charge of my projects and the games I play, and this thread has birthed the production of a UE version beyond what Epic is capable of seeing.
But if you only care about performance and don't want cutting-edge graphics, then you should just stick with UE4, IMO.
You're telling me I have no credibility when you make a statement like that? UE5 is UE4 with an optimized backend; I've even shown performance gains in "UE4" features within UE5 as versions pass.
UE5 has probably cut by a factor of 2 to 10 the time it takes a solo dev or small team to make a game, with good animations, good and fast asset placement (with PCG), and easy lighting setup with Lumen. Compared to UE4, a lot of things can be done directly inside the editor without relying on external software. I don't see any of that mentioned in your post.
I stated that UE is ahead of all other open options, and that isn't what's hurting performance. It's the fact that these systems require too much computation for visual standards already achieved on 8th gen, because FN can't allow for a design most third-party UE games would benefit from.
Nanite, VSMs, and Lumen all abuse an overly expensive TAA to remain visually stable, and performance is complete crap with no improvements that will help target hardware. These are terrible solutions to REAL problems.
Yes, Epic, please slow it down; we need 2-3 quality-of-life updates like Unreal 4 received in the past.
Just made a git commit: "Decima TAA Jitter Coordinates For UE5 Instead of Inefficient 2xMSAA".
Approving this will enable plugin devs and future versions of UE5 to support lower temporal frame weighting (less blur and ghosting) without specular highlights and undersampled objects being neglected, as they are now with r.TemporalAASamples 2 ("2xMSAA").
Give some support if you care about motion clarity.
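For readers unfamiliar with what the jitter sequence actually changes, here is a minimal sketch, in Python rather than the engine's C++, of Halton(2,3) sub-pixel jitter, the style of low-discrepancy sequence commonly used for TAA (including by Decima), compared with the two fixed offsets you effectively get at r.TemporalAASamples 2. The `halton` and `taa_jitter` functions are illustrative, not Epic's actual implementation:

```python
def halton(index, base):
    """Radical-inverse Halton sequence value in [0, 1); index starts at 1."""
    f, r = 1.0, 0.0
    while index > 0:
        f /= base
        r += f * (index % base)
        index //= base
    return r

def taa_jitter(frame, num_samples=8):
    """Sub-pixel camera jitter offset in [-0.5, 0.5) for a given frame.

    Cycles frame % num_samples through a Halton(2,3) pattern. With
    num_samples=2 (i.e. r.TemporalAASamples 2) the camera only ever
    sees two offsets, much like 2xMSAA sample positions."""
    i = (frame % num_samples) + 1
    return (halton(i, 2) - 0.5, halton(i, 3) - 0.5)

# With 8 samples, offsets spread across the pixel; with 2, they
# simply alternate, which is why undersampled detail never converges.
offsets_8 = {taa_jitter(f, 8) for f in range(16)}
offsets_2 = {taa_jitter(f, 2) for f in range(16)}
print(len(offsets_8), len(offsets_2))  # prints: 8 2
```

The point of a longer, well-distributed sequence is that static pixels accumulate many distinct sample positions over time, which is what lets a lower temporal frame weight still resolve specular highlights and thin geometry.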
Lol, Nanite is what games needed. UE5 is working towards the future… Hardware is getting exponentially more powerful every generation… Quit being stuck in the past. On 5.4.2 I get lower frame times with optimized Nanite tessellation on a 256 km² landscape with World Partition… "I can't cap out my 4K monitor's refresh rate on a 2080" isn't a realistic bar these days; well, it was always unrealistic, but still. Idk why so many people take it upon themselves to trash these new technologies; they're amazing. And biased tests that don't even cater to how Nanite likes to run don't mean anything. Folks should start paying attention when they tell us how to get better perf numbers…
If you're that mad about the new tech, why not go back to UE4 and use the old pipelines, man? Why try to mess up a good thing for the rest of us? Every single engine version update we gain performance; that's a fact. It's getting better and better, always. These issues you're complaining about are EXTREMELY complex, but they have some of the smartest people of our generation (imo) working on them, crushing the issues and pumping out actual optimizations with every update. You should give it a real try, though. Once you figure out the sweet spot for mesh density and how to combat overdraw, whatever slight performance cost over old low-poly LODs is definitely worth the sacrifice, lol. But I somehow think you're just on a hate train.
Hardware is not gaining performance exponentially, not even close. If you're counting DLSS and the like, then you're very wrong; ask yourself why we chase high frame rates in games. We chase them for responsiveness, not visual fidelity, responsiveness first of all, and DLSS and the others fall badly short there. And if you're talking about responsiveness in RPGs or similar adventure games, that's a joke; it matters in fast shooters and driving games. Also, I'm talking about indie and small game developers here. Our primary market is customers on mid- and low-range hardware. Those customers have an RTX 3060 at most, at most; we have to budget performance around an RX 580 or GTX 1070. Looked at from that perspective, with UE5 we're stuck below 25fps or worse. I have a 4090, but that means nothing to me: high-end GPU owners mostly buy AAA games, and that's not my market; my market is totally different. So whatever you say about high-end GPU performance on UE5 is totally irrelevant to this story!!!
Bad news on performance. I've been testing very recent builds of 5.5 and I'm seeing a 20% decrease in GPU performance in most scenarios. A really large regression. We should try to make this issue visible, because it would impact all projects negatively.
Thanks for the heads-up. If Threat Interactive decides to test these versions, a full profiling pass will be published.
Well, apparently the major GPU performance difference in 5.5 comes from shadow depths and volumetric clouds.
I checked that the volumetric cloud material is indeed heavier than the old one: 234 instructions vs 205. I'm not sure about the shadow depths one, but it shows 0.20ms vs 0.03ms, and there's about 0.25ms extra that's unaccounted for.
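Quick arithmetic on the figures quoted above (treating the 0.20ms/0.03ms values as per-frame GPU pass timings, which is an assumption) shows how these pieces stack toward the reported regression:

```python
# Figures quoted above (5.5 vs 5.4); illustrative arithmetic only,
# not additional measurements.
cloud_instr_new, cloud_instr_old = 234, 205   # volumetric cloud material
shadow_ms_new, shadow_ms_old = 0.20, 0.03     # shadow depths pass, ms
unaccounted_ms = 0.25                         # extra time not attributed

instr_growth = (cloud_instr_new - cloud_instr_old) / cloud_instr_old
shadow_delta_ms = shadow_ms_new - shadow_ms_old
known_delta_ms = shadow_delta_ms + unaccounted_ms

print(f"Cloud material: +{instr_growth:.1%} instructions")       # +14.1%
print(f"Shadow depths:  +{shadow_delta_ms:.2f} ms")              # +0.17 ms
print(f"Shadows + unaccounted: +{known_delta_ms:.2f} ms/frame")  # +0.42 ms
```

Note that instruction count is only a rough proxy for cost; the millisecond deltas are what actually accumulate per frame.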
It's bad faith to test an unannounced version of the engine that isn't even in preview. If you were to pull changes strategically from the 5.5/UE5-Main branch into 5.4, that would yield better results, but it still might not be accurate.
Typically, some commits from UE5-Main don't make it into the release branches, so you'll certainly pick up a lot of bloat or experimental code from the UE5-Main branch.
Or MAYBE that branch is in beta, and that's why it hasn't been officially released yet? Just maybe.
My guess would be it's done on purpose so people buy new hardware again and again. Planned obsolescence.
It's been half a month already. 5.5 has been up on GitHub for some days now (there's about a 500-commit difference vs main). 5.5 is expected to be released between the end of this month and October.
Performance hasn't changed at all; the difference is still massive (tested today's code). We're talking about a 280fps vs 350fps kind of difference. I've tested hardware Lumen, software Lumen, no Lumen/no Nanite, Vulkan, SM5/SM6, and the performance difference is present across the board.
Sadly, this isn't due to some bug; it's happening because of several changes made to the renderer, so performance shouldn't be expected to recover in the coming weeks.
Only hours after I compiled, a push was made that further downgraded performance.
Here:
Set r.RayTracing.Shadows.AvoidSelfIntersectionTraceDistance 1 by default so that RT Shadows work with Nanite. Tracing time 1.23ms → 1.79ms on a 2080 Ti. Overall performance regression of 20% for RT Shadows, but needed for it to work. [FYI] Tiago.Costa, Krzysztof.Narkowicz #jira UE-211955 #tests Firefly editor [CL 36243806 by Daniel Wright in 5.5 branch] (EpicGames/UnrealEngine@dfa6f4e on github.com)
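The fps gap reported earlier in the thread is easier to reason about in frame-time terms, since GPU work adds up in milliseconds, not in fps. A quick sketch using only the numbers already quoted (the `frame_time_ms` helper is illustrative):

```python
def frame_time_ms(fps):
    """Convert frames per second to milliseconds per frame."""
    return 1000.0 / fps

# 280 fps vs 350 fps, the difference reported above
slow, fast = frame_time_ms(280), frame_time_ms(350)
print(f"{slow:.2f} ms vs {fast:.2f} ms: "
      f"+{slow - fast:.2f} ms (+{slow / fast - 1:.0%} frame time)")
# prints: 3.57 ms vs 2.86 ms: +0.71 ms (+25% frame time)

# The RT-shadow commit's own numbers: 1.23 ms -> 1.79 ms tracing time
print(f"RT shadow tracing: +{1.79 / 1.23 - 1:.0%}")
# prints: RT shadow tracing: +46%
```

So a "70 fps" gap at these frame rates is about 0.71ms of extra GPU work per frame, which is why a few tenths of a millisecond in individual passes (shadow depths, clouds) can account for it.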
After getting into the final stages of optimization for my game, I'm happy to report a stable ~94-96 FPS at native 1440p in the editor with TAA in 5.4. Before, I could only get about ~72 FPS at native 1440p in the editor with UE 5.3.
My scenes are extremely heavy with Nanite masked foliage (trees, grass, and other plant life). Optimizing with Nanite was a feat, but it wasn't impossible. What this now shows me is that Nanite is more than capable of handling complexity IF you know how to use it properly. Nanite has so many cvars available that you really have to dive deep to figure out what will and will not work.