Which Blueprint nodes should I avoid using to reduce GPU/CPU load?

Which Blueprint nodes should I avoid using to reduce GPU/CPU load?
Or do you get better performance when you convert Blueprints into C++?

Hello TheDayDreamer,

I’d say just be careful with the logic you put on ‘Tick’, since it is calculated every frame.

The console commands stat fps and stat unit can be used to monitor performance.


Oof, that’s a loaded question… here are my two cents:

  1. Avoid GetAllActorsOfClass (it should be removed from the baseline, IMO).
  2. Seriously consider whether what you think belongs on Tick could be handled better with an event timer or, better yet, an event dispatcher.
  3. Remember that all actors and components tick by default. Disable it on those that never need it (which is a large majority of them in practice).
  4. Put some thought into your collision setups. If collision (Query or Physics) is turned on for a given actor, the physics engine is always tracking where it is and what it is doing.

…and certainly don’t do GetAllActorsOfClass on tick!

That is always going to be true, as Blueprint incurs a bit of overhead due to the translation layer. That said, for a lot of things that overhead will be negligible; it just depends on the use case.

As @Astrotronic suggested, use the stat commands, and start early. Learn to monitor performance early so you will be aware when something you introduce breaks performance.

Use BP for what it really excels at, which is handling media (visual, audio, etc.) and rapid prototyping. I, along with others, tend to prototype in BP first and then move functionality into C++. Iteration times are much faster in BP by a long shot.

Good luck!


I am using UE 5.3 and am now looking at the animation system with its multi-threaded Thread Safe Animation Update function. You can’t do as much with it as with the regular Animation Update, but it offloads the work from the game thread. This is still CPU-based, but it does improve overall frame rate by reducing the game thread’s workload.

I also found that spline curve calculations can be expensive (e.g., finding the 3D position on the curve at a given distance along it). I prototyped it in BP, but I knew from the start that I would have to replace it with a highly optimized C++ solution if I chose to go that route. By “optimized” I mean quite possibly using my own highly optimized method of calculating the position, without even relying on UE5’s framework.

Isn’t there a way to push large-scale parallel calculations onto the GPU (not talking about graphics/meshes/materials/textures) for even faster results? I know there is a solution from NVIDIA, but I don’t know whether it is available in UE5.


Thank you so much for taking the time to answer, Astrotronic 🙏

Thank you so much gh0stfase1🙏

Thanks HawaiianGamer🙏

Fun fact: GetAllActorsOfClass is actually not bad performance-wise, since deep down it uses the class’s hash to look up the objects 🙂

Some relevant reading: Inside UE Source: FUObjectHashTables - the magic behind GetAllActors* Nodes (casualdistractiongames.com)