Simply put, the total scene time that 'profilegpu' reports is drastically lower than the GPU MS shown by 'stat unit'.
VSync is off in my driver settings ("Always Off") for testing purposes (I prefer it on), so that's not it. I also used the console command to disable it in UE4, in case there was some weirdness there. So, to recap: it's off in UE4 and set to Always Off in my driver settings.
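For anyone reproducing this, the UE4-side setting is the r.VSync console variable (0 disables it, 1 enables it), entered in the in-game or editor console:

```
r.VSync 0
```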
One or two of these were taken in the PIE window. The rest (the lower numbers) were taken in a Standalone Game (still launched from the editor, but not in the PIE viewport).
As far as I know I don't have any textures on any of my assets. I have 3 materials, and the only one that isn't a material instance is the one for my rectangular player mesh stand-in. I have UV-mapped my cubes (just no textures applied), and I'm using the shape_plane provided by Epic, so I presume that's UV'd as well.
I reduced my resolution to 960 x 540 with no differences noted.
From the 'stat unit' numbers - you're game-thread bound, not GPU bound. The GPU goes idle if the game thread takes too long to feed it commands, and we can't remove that idle time from what's shown in 'stat unit'. When you do a 'profilegpu', you get the true GPU time.
In other words, work on optimizing your game thread and the frame time will go down. Making the GPU faster will not make the frame time go down in this case. The first step for game thread optimization is ‘stat dumpframe’.
Using the Session Frontend with the profiler ('stat StartFile' / 'stat StopFile'):
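For anyone following along, the capture workflow with those commands looks roughly like this (run in the console; the resulting capture file is then opened in the Session Frontend's Profiler tab):

```
stat StartFile    <- begin writing stats to a capture file
(play for a few seconds)
stat StopFile     <- end the capture
```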
I have some very oddly expensive references to self, especially in my HUD - references costing around 0.600 ms each, at just 1 call per frame. I also created a "Self Ref" variable in my HUD and replaced ALL references to self with that variable, and I still get this really expensive reference-to-self problem. Is there any way to reduce this? Would writing it in C++ help? There's also a "DrawStatsHUD" entry with some expensive stuff in it, but I think that's related to debugging.
Throughout all of my blueprints, typically 40-60% of the ms time is actually "Self", almost always with just 1 call per frame. I realize I can't reduce everything, just because of the nature of things (especially diminishing returns), but I'm already rewriting some blueprints and seeing decent returns (a rewrite of how I draw my health bars reduced my frame time by ~200ms).

I believe that, with my game as it currently is, I should be able to run at very high FPS on my computer: I don't have enemies running around, nearly my entire game is static, and I don't have anything crazy happening in my blueprints.

My Total Blueprint Time is 3.047 ms while Self is at 11.529 ms; both are lower than my total game thread time of 16.018 ms. I'm trying to get my thread times low enough to run the game, as it currently is, at around 120 FPS. I don't see why that shouldn't be reasonable given how simple my project is. Of course Unreal Engine has a lot going on and it scales really well - I just wanted to mention what my goals are.
Edit: I want to get my total frame time down to 8.20 ms; that's roughly 120 FPS by my calculations.
Self means the cost measured within the current event itself (not in any of its child events). As for game thread / blueprint optimizations, you might get better responses posting in a more appropriate section.