After downloading Unreal Engine 5.5.4 from Epic Games and creating a new project, I made an empty open-world scene. I deleted the volumetric clouds, pressed F11 to enter full-screen mode, and made no other changes. When I used stat initviews, I found that the View Visibility cost was very high, around 5-6ms.
I then downloaded an older version of the engine, 5.3.2, and performed the same steps. I found that the View Visibility cost was very low in the empty open-world scene. I went back to 5.5.4, and after I disabled TSR, the View Visibility cost became very low as well.
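For anyone repeating this A/B comparison, the anti-aliasing method can be flipped at runtime from the console while watching the stat overlay. The cvar values below are my recollection of the UE5 enum (0 = None, 1 = FXAA, 2 = TAA, 3 = MSAA, 4 = TSR), so verify them against your engine version:

```
stat initviews
r.AntiAliasingMethod 2
r.AntiAliasingMethod 4
```

Bring up the overlay first, then toggle between 2 (TAA) and 4 (TSR) while watching the View Visibility line.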
It looks like the bulk of the time is spent in OcclusionCullPipe. Here is a related discussion about “SyncPoint_Wait”:
[Content removed]
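As a quick sanity check that the wait really originates in occlusion culling, occlusion queries can be disabled temporarily. The cvar name below is from memory; confirm it exists in your build before relying on it:

```
r.AllowOcclusionQueries 0
stat initviews
r.AllowOcclusionQueries 1
```

If the View Visibility cost collapses with queries off, that points at the occlusion-result sync rather than the visibility traversal itself.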
When you say disabling TSR changes the view visibility time, are you also changing the screen percentage to 100% or keeping it as-is?
TSR is more expensive than TAA since it has to do additional computations to keep the image temporally stable even with a lot of moving pieces. This potentially changes when the previous frame’s occlusion results become available, which would cause your discrepancy.
Can you please double-check that Lumen is in the same state in both 5.3 and 5.5? It looks like it is disabled in your 5.3 trace, but is enabled in your 5.5 trace.
If this is the case, it would explain the time differences. Because Lumen makes the frame take longer to render, it pushes out the point at which the occlusion-cull results are ready, leading to the “SyncPoint_Wait” seen in the 5.5 trace.
I have confirmed the status of Lumen: it is enabled in both UE5.3 and 5.5 with identical settings. To rule out any impact from Lumen, I disabled it. The time in UE5.5 decreased after disabling it, but a gap compared to 5.3 still remains.
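For reference, this is how Lumen can be turned off from the console without editing project settings. The values (0 = none, 1 = Lumen) reflect my understanding of the UE5 cvars, so double-check them in your engine version:

```
r.DynamicGlobalIlluminationMethod 0
r.ReflectionMethod 0
```

Toggling these at runtime makes it easy to compare the same frame with and without Lumen in both engine versions.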
However, as the number of actors in the scene increases, this gap does not appear to widen. It seems to be a fixed overhead.
In our traces, we have not been able to find any frame time increase from Unreal Engine 5.3 to 5.5 in the default map, with Lumen disabled and no Volumetric Clouds.
[Image Removed]
In fact, the average GPU time decreased from 13.0 ms to 12.8 ms.
Can you check if the Screen Percentage is the same in both versions on your tests? Ours were done with the screen percentage at 100%.
In-editor, this isn’t the value of “r.ScreenPercentage”. Instead, open the viewport menu (the three bars in the upper-left corner) and look under “Screen Percentage” → “Current Screen Percentage”.
To make them the same value, you can enable the override, and change the value with a slider.
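To keep both engine versions comparable outside the editor override, the cvar can also be pinned in Engine/Config/ConsoleVariables.ini, which applies it at startup (note this sets the cvar itself; the editor viewport override described above is a separate setting):

```
[Startup]
; Force a fixed render resolution scale for profiling
r.ScreenPercentage=100
```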
The Current Screen Percentage was consistently 74 in both versions. I tried setting it to 100, but nothing changed. However, unexpectedly, after pulling the code from GitHub, compiling the engine, and reopening the project, the abnormal performance difference shrank to only about 1 ms, which is acceptable. I’m not sure what happened in between, but it did resolve the issue I was facing.
Now that your issue is solved, can we close your case? If the performance delta returns, or you need other assistance with this issue, you can always re-open the case.