Hello everyone! I want to upgrade from 5.4 to 5.5, but I’ve noticed that the performance in 5.5 is terrible, and I can’t figure out why. With absolutely identical settings, there’s a huge difference in Draw, RHIT, Mem, and VMem.
Low settings: (screenshot)
I tried looking at the Unreal Insights report, but I don’t really understand where to look or what to focus on to identify the issue.
For example, frame 720 in 5.5 seems to have a lot going on.
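In case someone wants to compare the same capture on both versions: launching each editor with the same trace channels makes the Insights timelines directly comparable. A minimal sketch, assuming a project file called MyProject.uproject:

```
UnrealEditor.exe MyProject.uproject -trace=cpu,gpu,frame,bookmark -statnamedevents
```

Or, from the in-editor console while reproducing the hitch:

```
Trace.Start cpu,gpu,frame
Trace.Stop
```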
There seem to be a lot of Slate draw commands. Maybe try moving some of the driving logic into material functions (via masks etc.); that can help offload the CPU.
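To confirm whether Slate is actually the cost, the stock stat groups are enough for a first pass: `stat unit` shows Game/Draw/GPU/RHIT times, `stat slate` shows Slate tick and paint cost, and `stat scenerendering` shows draw call counts:

```
stat unit
stat slate
stat scenerendering
```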
Do you have a lot of logic in Tick functions? Your tick time is very high. It seems to have even tripled at times during the lag spikes in 5.5.
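If a lot of that Tick logic doesn’t actually need to run every frame, a timer is a cheap experiment. A minimal sketch (AMyActor, DoPeriodicWork, and WorkTimer are placeholder names, not anything from your project):

```cpp
// Hypothetical actor, illustrating moving non-per-frame logic from Tick
// to a timer. WorkTimer is an FTimerHandle member declared in the header.
#include "MyActor.h"
#include "TimerManager.h"

AMyActor::AMyActor()
{
    // Don't pay the per-frame Tick cost for logic that isn't per-frame.
    PrimaryActorTick.bCanEverTick = false;
}

void AMyActor::BeginPlay()
{
    Super::BeginPlay();
    // Run the heavy logic 5 times per second instead of every frame.
    GetWorldTimerManager().SetTimer(WorkTimer, this, &AMyActor::DoPeriodicWork, 0.2f, /*bLoop=*/true);
}

void AMyActor::DoPeriodicWork()
{
    // ...whatever used to live in Tick...
}
```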
UE 5.5 also uses a lot more VRAM, so it could be taxing the engine with heavier calculations (I’m guessing they tweaked Lumen and ray tracing, maybe pushing some parameters too far). Maybe some console variable settings can fix this?
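If it is Lumen/ray tracing defaults, these are the kinds of console variables you could A/B between 5.4 and 5.5. The zeroed values below are just a “turn it off and compare” baseline, not recommended shipping settings (they can go in Engine/Config/ConsoleVariables.ini or be typed in the console without the `=`):

```
[Startup]
; baseline for A/B testing, not shipping settings
r.DynamicGlobalIlluminationMethod=0
r.ReflectionMethod=2
r.Lumen.HardwareRayTracing=0
r.Shadow.Virtual.Enable=0
```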
Same here. But the curious thing is that it’s random. When I press Play, it’s 60 fps. When I stop and press Play again, it’s 8 fps. When I stop and press Play again, it’s 60 fps… there is no repro for it.
Thank you for your response, but I don’t understand why this issue isn’t present in 5.4 — if it’s happening in 5.5, then the problem lies in the engine itself.
I disabled real-time thumbnails, but I doubt it will make any significant difference.
I do have some logic in Tick functions, but keep in mind that this performance drop is observed in the editor; in a build, performance would be 3–4 times better. Additionally, it’s clear that in 5.5 the game thread time has also increased by almost 4 ms.
No settings were changed; the projects are absolutely identical. I made a copy of my project and migrated it to 5.5.
In my case, the performance loss is stable. Out of curiosity I even removed everything from the scene except the single actor controlled by the player, and performance didn’t improve at all.
World tick time seems to have been slowly rising with each version ever since 5.0.
That Input parameter seems unreasonably high too. It’s a new addition to the profiling stats.
I’ve noticed that if I drop the quality settings, the input lag seems to drop (High settings seem OK; anything above that seems to cause the Input time to spike).
Edit: I’m guessing the Input stat does not reflect button input.
I expect some performance drop compared to 4.27; however, I’ve never experienced such a significant decrease before, and I’ve upgraded to every new version. I thought it might be a simple bug that would be fixed in 5.5.1, but either the issue is more complex, or Epic just doesn’t care.
I don’t even understand what this parameter represents. Even when I don’t press any buttons, it still shows almost exactly the same value.
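From what I can tell it comes from the standard unit stats overlay; I couldn’t find exact documentation for how 5.5 measures it, but the graph variant at least shows whether it spikes together with the frame time or on its own:

```
stat unit
stat unitgraph
```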
This is laughable.
Not because of your (correct) expectation, but because it’s a known fact.
Essentially, every new major engine version drops around 20% in performance on the same hardware it used to run perfectly fine on before.
Epic does this on purpose.
Guessing you never updated the engine version on the same project before…
It’s not.
It’s the engine’s rendering pipeline, poisoned by incompetent developers doing things they shouldn’t be doing: Lumen, Nanite, etc.
That’s an issue. If you are CPU bound, that usually means you are doing something stupid somewhere that was set up as a test and got left in place;
mind you, this could also be true for something that Epic employees did in the build.
(Notice the if. I don’t think you are.)
But the first thing you should look at, regardless of what you assume the issue to be, is the engine settings.
Make sure they match up between the old and new.
This used to happen with silly s*it like distance fields getting disabled, for instance.
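A quick way to catch exactly that class of mismatch is to diff Config/DefaultEngine.ini between the two project copies; the distance field case would look something like this (the flag is the real renderer setting, the mismatch itself is hypothetical):

```
[/Script/Engine.RendererSettings]
; verify this (and similar flags) match in both project copies
r.GenerateMeshDistanceFields=True
```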
The second thing is to take an axe to the new features, so that you have what you had before.
The third thing is to go in and actually check for silly mistakes in the scene actors, level BPs, etc.
The fourth thing would then be to do a proper benchmark and pick out the things that are an issue.
I’ll give you one right off the bat: your minimap scene capture and its settings.
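For reference, the usual mitigation for a minimap capture is to stop it rendering every frame and refresh it on a timer instead. A rough sketch, with placeholder names (AMinimapActor, MinimapCapture, MinimapTimer):

```cpp
// BeginPlay of a hypothetical actor that owns the capture. MinimapCapture
// is a USceneCaptureComponent2D* member; MinimapTimer is an FTimerHandle
// member, both declared in the header.
#include "MinimapActor.h"
#include "Components/SceneCaptureComponent2D.h"
#include "TimerManager.h"

void AMinimapActor::BeginPlay()
{
    Super::BeginPlay();

    // Stop re-rendering the render target every frame.
    MinimapCapture->bCaptureEveryFrame = false;
    MinimapCapture->bCaptureOnMovement = false;

    // Refresh the minimap at 10 Hz instead.
    GetWorldTimerManager().SetTimer(MinimapTimer, [this]()
    {
        MinimapCapture->CaptureScene();
    }, 0.1f, /*bLoop=*/true);
}
```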
I did that every time. I started this project on 4.27 and then moved to 5.0, 5.1, 5.2, 5.3, 5.4, and finally tried to do the same with 5.5, but this version is the worst I’ve ever experienced.
Settings are the same; I copied my 5.4 project and converted it to 5.5. I even checked some of them manually.
I didn’t get what you suggested.
Nothing. I even deleted everything in the scene except my player actor and didn’t notice any difference.
I tried, but I don’t understand how to find the cause; my project is too large to go switching features on and off to find it. If it isn’t fixed I will stay on 5.4.
Settings are not it, but I did think about the minimap RTT; I turned it off and it doesn’t make any difference. Since I tried different methods and nothing helped, that’s why I think Epic released this version with stupid bugs that I expected them to fix in 5.5.1.
And even if the problem were in the RTT or in the settings, then why is everything great in 5.4 but so bad in 5.5? This can only be due to a lack of competence on the part of the engineers responsible for this version.
Hey man, sub-60 fps in PIE is anything but great.
As a matter of fact, if you are working off a 4090 and your screen is not 4K, that near-60 fps means your final resolution for 60 fps is probably 1080p or less, which in 2024 is ridiculous. Like anything else that’s made with Unreal, really.
That’s always a guarantee. They have zero idea what they are doing at Epic, and that’s proven by the fact that they keep pushing forward with TAA as the default.
Anyway, other than the tips I shared, I have no idea what the real issue could be. I don’t use the latest engine versions, nor do I ever upgrade projects started on older engine versions; that’s just bad practice anyway…
Do you notice that when you run the game in the editor it only uses one thread?
(In my case that happens).
When running standalone it uses six threads!!
That’s why the performance in the editor is horrible and in standalone it’s great.
I think the engine developers need to work more on multithreading, because there is little that can be done outside the game thread.
I have a machine with 64 threads and it only uses one… in the middle of 2024… I think I could do the same with my Pentium 4 from the year 2000 (there really is no difference).
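For what it’s worth, pure computation can be moved off the game thread even now; it’s the gameplay framework calls that have to stay on it. A minimal self-contained sketch using the engine’s AsyncTask helper (the loop is just a stand-in for real work):

```cpp
#include "Async/Async.h"
#include "Logging/LogMacros.h"

void StartBackgroundWork()
{
    // Pure, self-contained work goes to the task pool instead of Tick.
    AsyncTask(ENamedThreads::AnyBackgroundThreadNormalTask, []()
    {
        int64 Sum = 0;
        for (int64 i = 0; i < 100000000; ++i) { Sum += i; } // stand-in for real work

        // Marshal the result back: UObject/world access must stay on the game thread.
        AsyncTask(ENamedThreads::GameThread, [Sum]()
        {
            UE_LOG(LogTemp, Log, TEXT("Background sum = %lld"), Sum);
        });
    });
}
```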
@Ivan3z Rendering is on another thread; audio and physics are also threaded, and some AI might be async (so on the same core but split up over time). The core at 100% usage is probably game logic.
You have 7 cores at around 28–40% usage. The subsystems just don’t need 100% of the other cores, as their calculations don’t max out the cores’ capabilities.
Your Xeon is also not boosting to its normal speeds; it should be able to push to around 3.6 GHz, but it is below the base clock for some reason. Is your CPU in some sort of power-saving mode?
When did you last replace the processor’s thermal paste? It might have hardened and stopped transferring heat properly, throttling your CPU.
The maximum I have seen it reach is 3.2 GHz (between 60 and 70 °C). I don’t know under what conditions turbo boost is activated, but I have never seen it at 3.6 GHz.
I’m using a carbon fiber pad (Thermal Grizzly), so I think temperature is not the problem… plus I have server coolers. They cool very well (and make a lot of noise too)…
Well, I don’t know… I’ll have to look into that… but I don’t think so… I’ve pushed all the cores to 100% many times… but I’ll check it anyway.