Released UE5 games: 60fps performance monitoring of games using UE5.

I would like to thank @jblackwell for the idea for this thread.

The point of this thread is to show how reliable or unreliable UE5’s performance can be with major games. We post the current performance and settings of each game and/or project, including those released by Epic Games.

This thread can show whether UE5 games can meet the current-gen rasterization-to-resolution ratio. It also lets the UE5 engine programmers see where the engine’s weaknesses are.

  • This means NO upscaling should be included in any post. That is not real optimization. Remember not to use TSR as an AA method, because its creator clearly stated it’s an upscaling method and it can tank performance unreasonably at native resolutions.

  • And since 60fps is the gold standard, make sure the game isn’t using Epic Lumen, as that targets 30fps. High Lumen and reflections are the 60fps version of Lumen. Nothing we can do about that.

  • Make sure to include the GPU and resolution in a medium-to-complex scene in the game, and maybe even the settings you needed to use to achieve 60fps at native resolution (if you even can).

  • Include rendering artifacts, since Epic Games and plenty of developers apparently call scaling down features “optimizing.” With lots of these features, scaling down too much ends up causing severe artifacts. If they have artifacts, they will probably force TAA cancer on players, which is unacceptable.

Using the Universal Unreal Engine 5 Unlocker is welcome for showing what performance could have been and for making sure the resolution and screen percentage are correct. For instance, some recent games have set sg.ResolutionQuality below 100 (to hide bad performance) without letting players know to change it back to full resolution/100.
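For anyone who wants to verify this themselves, here is a minimal sketch (exact file locations vary per game, and not every shipped title honors these):
In the console (via the unlocker): r.ScreenPercentage 100
In the saved settings file (usually GameUserSettings.ini, under the [ScalabilityGroups] section): sg.ResolutionQuality=100
100 means a full-resolution internal render; anything lower means the game is quietly upscaling.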

I’ll start:
Fortnite 5.1
This game’s high-fidelity mode, using Nanite, Lumen, and VSMs, was produced by UE5 engine programmers, and the performance is concerning for a “simple stylized game.”

On a Desktop 3060, at Native 1080p, with Motion blur on and these settings.

Here is the FPS: 66fps at a wide viewing angle, so if you play on the ground, better perf…right?

Oh except you don’t really get a performance boost on the ground.

Absolutely horrible performance for such a simple project. Same perf as City Sample.
I made sure not to include much sky when monitoring performance, since rendering the sky is pretty cheap and sky-heavy views are uncommon.

(EDIT)
Most will say, oh well, nothing is static or baked! Okay, so what? Then it should be caching the buildings/stationary objects that aren’t being destroyed more aggressively.

If someone wants to make the argument that a static environment would perform better, then why does City Sample, at the same settings (via cvar sync), with NO AI and no WPO, perform the same?

2 Likes

City Sample, compiled on 5.1 (5.2 has had performance increases).

You can only achieve 60fps with no post-processing like motion blur and barely any bloom:
High Lumen, Medium shadows, FXAA, low shading quality.

Native 1080p, on a Desktop 3060: Produced by Epic Games.

Note: this is using HWRT Lumen. It has a lot of flickering issues and requires a temporal filter to hide them. At 1080p, a temporal filter like TAA or DLSS is unacceptable, even with DLSS 3.5.
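If you want to compare hardware and software Lumen yourself, the relevant toggle is below (assuming the packaged build exposes the cvar, which not every game does):
r.Lumen.HardwareRayTracing 1
r.Lumen.HardwareRayTracing 0
1 uses hardware ray tracing (what this City Sample run is using); 0 falls back to Lumen’s software tracing.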

During any camera motion, frames drop down to 49fps, then 62fps if you stay still…great.

EDIT: CPU was always at 50% usage, so this was a GPU bottleneck. Also, I did not compile this project myself.

1 Like

I appreciate you citing me as inspiration for the post, and I do believe that a good understanding of the performance of UE5 games matters, but I feel the need to say my piece at this point. This is just my opinion, not a declaration of right or wrong, or an accusation against anybody. The beautiful purpose of forums like these is that we can discuss differing viewpoints to come to a more correct conclusion as a community.

In my opinion, and in my understanding of the games industry as a whole, I think upscaling should be seen as a legitimate tool to improve performance, like any other optimization tool. I don’t believe it should be either damned as a terrible feature or praised as the savior of modern games; it’s just another lever.

First off, I say this because, at the end of the day, what a game should look like and how it should run is a choice by the developer. If you have engine coders, then you can tinker with and tune the engine, or even get a custom solution. Digital Foundry makes a good point of this, but plenty of last-gen titles used upscaling optimizations of one sort or another. Epic’s DFAO runs at a comically low resolution to keep performance up, and even Remedy Studios’ Northlight engine used lower-than-screen-res reflection buffers; they had the benefit of the engine being tuned perfectly to whatever game they were making. There is no such thing as correct performance, only the performance you can get. Hacks and cheats make the (game) world go round.

Second off, the very architecture of these next-gen technologies is built around upsampling of one form or another. Lumen’s screen probes are tracing rays from an extremely downsampled GBuffer, as illustrated by the lumen team’s slide:

If you go off of the team’s figures of a 1080p internal resolution, then that means the Lumen radiance buffer is only 67 by 120 (in a very specific sense of things). Getting that up to 1080p is all about smart spatial and temporal filtering, and then you’re using TSR again to get that to 4K. Most of the test numbers I see from the Lumen team are taken at native 1080p, because it doesn’t seem they ever expected to reach 4K natively with Lumen.
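To put rough numbers on that (my own back-of-the-envelope math, assuming Lumen’s default screen-probe spacing of one probe per 16 pixels, i.e. r.Lumen.ScreenProbeGather.DownsampleFactor=16): 1920 / 16 = 120 probes across and 1080 / 16 ≈ 67 probes down, which is where that 67 by 120 figure comes from.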

Bottom line: I believe upsampling is essential to games, and it’s as legitimate a tool as anything else a dev can use. It is the developer’s job to ship their game with the technology on hand. Either they can, or they cannot, but that is alas the nature of the beast. That being said, there are plenty of legitimate grievances with badly-optimized titles that use upsampling (badly), but that should be seen (IMHO) as the fault of the developer, and not of the technology itself.
Games ‘shouldn’t’ perform any given way. It’s the developer’s job to make them perform as well as they can.

I know this was a lot of text, and I want to be completely clear that this isn’t a barb against anyone. We are all here to make good games and media, and we are dealing with one of the most complex pieces of tech there has ever been. We’re all here to do our best work, and ideally we can do so with each other’s help. I’m happy we can have these discussions here.

6 Likes

My reply to your entire post:

I think I failed to clarify what I consider “upscaling.”
I’m talking about DLSS, FSR, TSR, even TAAU, used to get a game running at so-called “1080p” at 60fps on a current-gen 1080p GPU such as the 3060 or A770.

I think it’s pretty apparent that upscaling certain pipelines, such as reflections, AO, or bloom, can be a great way to keep visual fidelity and performance.
But scaling down the actual number of output pixels is not. There is a massive slippery slope when you begin to develop a high-fidelity game and then lower the entire resolution.

The graphics become an oxymoron.

Many developers are specifically using output-resolution upscalers like DLSS to ship 720p games that run at 60fps on a 3060. They use DLSS or FSR to call it 1080p when it’s really 720p. It should run at 1080p 60fps on a 3060, because real 1080p (not ruined by a temporal AA method) isn’t like pouring acid into my eyes the way 720p pretending to be 1080p is. Even if that 720p is path-traced goodness.

The biggest problem is that most developers are using high-end 4K or 1440p cards because they make building the game faster. But then they test the game at 4K and use DLSS or TSR to reach 60fps at fake 4K. They upscale from a better-suited resolution like 1080p or 1440p to a 4K-ish image,
which, in fact, may not look like blurry hell.

Most people can only afford 1080p gaming, which is around $300-$350. This is why respecting native-resolution performance on your development card is so important.
This is the reason I will still have my 3060 when I have a 4090.

Downscale features if they don’t have massive artifacts, whoop-de-doo. But leave my output resolution alone. And don’t ruin my 1080p with forced TAA. The whole point of PC gaming is to have the choice, and we have hundreds of games with forced TAA and now FORCED output-resolution upscaling.

As for consoles like the PS5 or Series X, I already stated this in another post. Sony and Microsoft advertised 4K gaming when both the PS5 and Series X are nowhere near 4K-class GPUs (in terms of the current-gen rasterization-to-resolution ratio), so studios have to use output-resolution upscaling methods.

“I know this was a lot of text”

It’s a forum; all opinions must be public for the sake of progress. I’m here to read anything worth typing down and build on that.

EDIT: Btw, when I say 1080p for a 3060, you can replace the GPU with a 3080 and the game should ship at native 4K, or 1440p for an RX 6750. These cards perform about the same when playing at their targeted resolution. Just in case anyone missed the current-gen rasterization-to-resolution scale in the first post.

1 Like

Just as a general hardware thing: 1440p and especially 4K monitors are a downgrade for most systems. :expressionless:

There are still barely any cards that can push graphically non-trivial games at those native resolutions without upscaling.

Outdated test.

I haven’t been able to test Remnant 2 and Immortals of Aveum.
They are not worth paying for, so I’m trying to find someone else to do the tests.

If you research the performance of those two games yourself, you’ll see that both fail HARD at reaching the 60fps resolution-to-hardware ratio. Neither looks good enough to even justify its performance.

Consumers are beginning to HATE Unreal. It’s at the point where studios might become embarrassed to announce they are using this engine.

In case a moderator comes by: this is an entirely different situation thanks to Fortnite’s performance, a stylized game that can only just reach 60fps on the current-gen resolution-to-rasterization scale.

Just tried out The First Descendant via the crossplay test.
The application would not let me take screenshots for some reason, which is really unfortunate given how blurry this game was.

Outdated test

First off, it seems like they tweaked the engine. (By the end of this, you can judge whether the tweaks were good.)

Tests were done at “1080p” on an RTX 3060.

I could only test in large interiors atm.
At max settings, I rarely dropped below 66fps with 100% usage of the desktop 3060.

  • But I wasn’t able to confirm this was real 1080p due to the fact that TAA or upscalers like DLSS were forced on.

  • Could not use the UUE5 unlocker on it to confirm the 1080p internal resolution, nor did the application honor ini edits.

The game runs fine, but it looks like blurry, smeary crap.

From the appearance of the trailers, it’s only meant to look good at 4K.
And I’m not talking about playing on a 3080; a 3080 will only get you 60fps at 4K.
UE’s TAA and TSR are also dependent on the frame rate.

Which, again, is stupid and insulting to the average wallet, since playing at 4K is extremely hard even with an $800+ computer budget.
This game has the same graphics-oxymoron mentality once again.

EDIT: After some edits to the code, here is the performance at confirmed native 1080p on a desktop 3060.

  • This game DOES NOT use Nanite. The meshes are optimized LODs.
    Smart move by the dev team. At first, the developers stated they wanted to use Nanite, but in the end they didn’t.

  • It does use Lumen, even on “low” GI, but as the Nanite tests showed, performance would be manageable without Nanite’s overhead.

  • It does use VSMs, confirmed via the SMRT artifacts.

On max settings in the large open player hub, the average FPS is around 47-51. That’s pretty good for placebo “ultra” settings.

What happens when you optimize?
Take down just these three settings (rough cvar equivalents are sketched below the results):

  • Post Processing

  • GI

  • Reflections

From Ultra to High, I’m getting an average of 60-68fps at native 1080p on a desktop 3060.
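For anyone reproducing this, those three options roughly correspond to UE’s scalability groups (the exact mapping is my assumption about this game; in stock UE, 0=Low, 1=Medium, 2=High, 3=Epic):
sg.PostProcessQuality=2
sg.GlobalIlluminationQuality=2
sg.ReflectionQuality=2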
Well done, Nexon. Well done indeed for delivering great graphics that meet the current-gen resolution-to-rasterization ratio.

This is what happens when you don’t use Nanite: not cartoony Nanite meshes barely performing at 60fps, but optimized environment meshes doing it.

This is a good testament to UE5.

1 Like

For anyone wondering:

Try out Lumen reflections with these commands; they should make reflections more stable without TAA and give more performance.
r.Lumen.Reflections.DownsampleFactor=4
r.Lumen.Reflections.MaxRoughnessToTrace=0.31
r.Lumen.Reflections.RoughnessFadeLength=0.019
r.Lumen.Reflections.SmoothBias=1.0
r.Lumen.Reflections.ScreenSpaceReconstruction.KernelRadius=0.14
r.Lumen.Reflections.BilateralFilter=0

What this does is concentrate reflections where they are needed, lower the reflection resolution, and then reconstruct a shinier/clearer output.
You might need to get rid of TAA and jittering upscalers like DLSS, or these reflections will wobble.
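If you want these to persist instead of retyping them every launch, they can usually go in Engine.ini under a [SystemSettings] section (whether a shipped game reads that file is hit or miss), for example:
[SystemSettings]
r.Lumen.Reflections.DownsampleFactor=4
r.Lumen.Reflections.BilateralFilter=0
with the rest of the cvars above added in the same key=value form.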
I was helping a friend get better/less noisy reflections in a game.
Hopefully I can do a piece on that game’s performance here soon.


Update: This caused artifacting for some people, so of course use common sense and see if it works for your specific project/scene.

4 Likes

Tekken 8 Demo performance review.

Disturbing representation of native performance and forced TAA/upscalers

One of the most disturbing parts of this game’s production style is the fact that they labeled native resolution as “Ultra” (found in the save game as sg.ResolutionQuality=100).
And they also force TAA and upscaling.

  • When testing the performance, I used r.ScreenPercentage 100 with the UUE5 unlocker to make sure it was true 1080p.

Because they force upscaling and temporal AA/output-resolution upscalers, there is no option for FXAA. The gameplay looks terrible in real life, as I suspected, @Guillaume.Abadie . Why would the developers offer FXAA when your AA documentation describes it as a method that hurts image fidelity, even though the temporal methods ruin motion? I have no doubt that if CMAA2 or SMAA were offered in the engine, we would not have received such a poor presentation of this franchise.

I was forced to use the UUE5 unlocker to get rid of the TAA (r.AntiAliasingMethod 1), and my god… this game looked fantastic in motion with no temporal crap ruining it.
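For anyone else testing: the stock UE5 values for that cvar are r.AntiAliasingMethod 0 (none), 1 (FXAA), 2 (TAA), 3 (MSAA, forward renderer only) and 4 (TSR). Whether a shipped game actually honors the change depends on the build.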

Why did they force TAA?

  • To hide TAA-dependent hair and objects that use ugly temporal dithering.

How does it perform? 55-65% GPU usage on a desktop 3060 on max settings, TAAU. Very good performance.

Why does it perform so well?

  • Well, for one, it doesn’t use Nanite or Lumen like this trailer did.
    These features were confirmed by the devs to be absent from the game.

  • Nor does it use Virtual Shadow Maps, because typing in r.Shadow.Virtual.Enable 1 changed the way some objects were lit (they became brighter), and, tbh, it really didn’t make a difference in visuals. And I think I saw a 10fps decrease in perf when I forced VSMs on with the UUE5 unlocker (it was still 60fps at native 1080p on a 20% slower mobile 3060).

More tests to come. I made sure to test perf in the largest environment at the correct resolution for the current-gen rasterization ratio scale.

And no, btw, I could not force Lumen or Nanite on. They didn’t build this 60fps-targeted project with Lumen or Nanite in mind, even though they had the reveal trailer running at 60fps on a PS5 with good “4K” visuals.

This isn’t the complete game, but this AAA studio is currently unhappy with the visuals.

Let the results speak for themselves.

I don’t think you understand the whole point of performance nor how complicated these games actually are.

Your Fortnite test. How do you not see all the visual elements in front of you!!! Plus, the whole point of performance is to make the game look good. Do you like Fortnite? I don’t care, but that is what matters. What sells is visuals, not numbers. The city test too: there is so much going on there that you don’t see, like MetaHumans. Reminds me of your stupid thread on how “Nanite does not help performance, Epic is ruining games,” yada yada yada.

You should really put that middle finger in Reddit’s hole, not Epic’s.

I don’t think you really understand the concept of Nanite, bro. Every building you see in Fortnite is modeled by hand; it contains no normal maps or displacement maps. That’s why Nanite is present: instead of LODs and pop-ins, you get a smooth transition under the hood.
UE5.4 introduced some major performance upgrades and optimisations. For example, the Matrix city demo ran at about 40-50 fps at 4K; now it runs at up to 80fps.

“the matrix city demo ran about 40-50 fps on 4k, now it runs up to 80fps.”

I have tested 5.4 with public results; I’ll believe that when I see it. Were these tests done with upscaling? Because that matters, since the cost of upscaling has been fluctuating a lot.

Nor are those results happening on affordable hardware ($300 GPUs / 9th-gen consoles that are 85% faster than 8th-gen, which is where near-realistic visuals are expected).

Where and what are you basing this on? There is more to “performance” in the consumer sense than the GPU; everything matters, from the OS down to the PCIe version the card is running at. CPUs do matter. A “3060” will never reach 16.6 ms with a Pentium 2 in any 3D title from the last 7-10 years (I am sure there is an exception, but you get the point).

60fps has never been the “gold standard”; 30 Hz, or 33.3 ms, has been the baseline for decades. Even modern-day home gaming consoles have titles that run at 30 Hz. I am sure many of your favorite games are and were developed for 30 frames per second. 60 FPS is nice, so is 120, so is 500.
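For reference, frame time is just 1000 ms divided by the frame rate: 1000 / 60 ≈ 16.7 ms and 1000 / 30 ≈ 33.3 ms.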

After reading this thread, it also seems like you have strong opinions on TAA/DLSS/FSR/TSR/XeSS, or really any temporal solution. So much so as to call it a cancer! But is it? From Wikipedia:

" Temporal anti-aliasing (TAA) is a spatial anti-aliasing technique for computer-generated video that combines information from past frames and the current frame to remove jaggies in the current frame. In TAA, each pixel is sampled once per frame but in each frame the sample is at a different location within the pixel. Pixels sampled in past frames are blended with pixels sampled in the current frame to produce an anti-aliased image. Although this method makes TAA achieve a result comparable to supersampling, the technique inevitably causes ghosting and blurriness to the image"

Ghosting (the downside) is bad. No disagreement there. But in this image comparing no AA to SSAA, I think it’s pretty clear that any solution comparable to SSAA is the best choice. Other solutions are older and less efficient.

From Wikipedia (again):
“TAA effectively computes MSAA over multiple frames and achieves the same quality as MSAA with lower computational cost”

This is why MSAA has lost favor in real-time rendering in some products: it’s more expensive. Any YouTube video explaining what the Nvidia control panel MSAA options did would have given you this information.

And FXAA, or Fast Approximate Anti-Aliasing, is a primitive solution I would recommend to no one. It was developed by Nvidia a long, long time ago.

Then there is DLAA (an Nvidia-only solution for now). Wiki:
“DLAA handles anti-aliasing with a focus on visual quality. DLAA runs at the given screen resolution with no upscaling or downscaling functionality”

DLAA is also “trained to identify and fix temporal artifacts, instead of manually programmed heuristics” which Wikipedia states is similar to a blur filter.

But enough about anti-aliasing; what about upscaling? Modern effects are expensive, and consumers demand higher-quality visuals in each product iteration, in turn requiring these same consumers to acquire more powerful hardware over time. The cycle continues over and over, leading to today. However, hardware has a limit; we are not there quite yet, but we are getting closer. Computer microchip power has increased dramatically over the past 10 years (in both server and consumer applications and deployments). Upscaling, if done well, offers a way to extend this process (why do you think Nvidia, AMD, and Intel have promoted it as much as they have?). 4K rendering is hard, 1440p is becoming the new baseline, and 1080p is going the way of 768p and 720p. People will just stop buying 1080p displays, and the manufacturers will stop making them.

Engines need to be forward-thinking and account for the technology of their time. There will come a day when you can play (insert your game here!) at 4K 300 FPS. It happened to Quake! (That game simultaneously runs at 500 FPS and at 19 with RTX ON! :joy:)

Nanite is one of these advanced new technologies I was describing! It’s cool, it’s new, and it means I don’t need to make 5 or 10 LODs per mesh. Same with Lumen: now I don’t need to worry about static meshes and the number of realtime lights in an area, or, for that matter, baking the lightmaps for hours! Yes, all this comes at a small performance hit, but I saved 5 or 15 hours!!! And the customer, well, they will upgrade if they want to.

If Unreal doesn’t suit your needs, then use another engine, or make your own! Bungie made their own engine; DICE, same thing. There are other game engines: Unity, CryEngine, Unigine, Godot! The project tells you what engine you should use. The engine should never hamper the creative freedom of the project.

Apologies if I jump around, but this is a long post!

I want to jump back to some of your initial screenshots. The Fortnite ones are meaningless (let’s not even get into the settings you chose :joy: medium textures on a 12 GB 3060); they are two different parts of the map, looking in two different directions, with different assets on screen. A better comparison would be an averaged benchmark over multiple runs on multiple hardware configurations. Digital Foundry, or any of those other outlets, are the benchmark for this kind of stuff.

I have never heard of the “rasterization to resolution scale” you’re referring to; please provide a link?
Quick history lesson: as far as I am aware, rasterization is a solution to approximate what a ray-traced scene looks like! :tada:

That’s all; if I need to add anything here, I will!
Kudos