Hi, so I have a few questions about Blueprint performance. Since I need my Blueprints very clean, I tend to use macros, more specifically a "Macro Library".
Since I am using them to collapse many nodes into smaller pieces, I was wondering about the performance impact.
Questions about some specific performance cases:
Does making macros affect performance in any way? Even if I had, say, 100 macros in one Blueprint?
Is checking an object's tag every tick bad? If so, how much?
Is using a switch node on String, Name, Vector, etc. bad for performance?
Is updating a variable every tick bad? For example, storing the delta seconds from "Event Tick" in a variable so I can use it elsewhere, instead of dragging the delta pin to a node.
Are raycasts every tick bad? How many plain raycasts, each breaking its hit result and updating a variable, does it take to really impact performance?
Is using Get All Actors of Class and grabbing a reference by index, instead of looping over the result, bad for performance?
This recent topic went a bit off topic, but there are some test results inside:
Now, this is some speculation. Blueprints are translated down to the C++ level, which after compilation is machine language, plus some extra code on top.
This is very simplified, kind of like it was 30 years ago, but it can give you an idea:
For example, an ADD operation on an 8086-family CPU (Intel or AMD) is maybe around 100 clock cycles. Then there is the C++ part: getting the arguments, storing them, calling the function that does the ADD, returning, and so on. I'd guess that takes about 1000 clock cycles. The Blueprint layer on top of that probably adds 5000-10000 cycles per single Blueprint node executed (I am guessing here).
So the actual work for your code takes 100 to 500 cycles per simple instruction; C++ adds about 1000, and Blueprints add at least 5000 more. I think that is why what you do inside a Blueprint node does not really matter: it is at best 10%, and probably closer to 1%, of the clock cycles of one whole Blueprint node execution. What matters is the number of Blueprint nodes executed.
So back to your questions:
The number of macros does not matter much; what matters is the number of EXECUTED Blueprint nodes inside all of them. (Macros are expanded inline when the Blueprint compiles, so the macro itself adds nothing at runtime.)
Checking for tags is tricky to judge without looking at its C++ code. If it is a string comparison, that is a bit more costly than a simple instruction, but not by much; the real work is probably still far less than all the C++ and Blueprint layer code around it. And if Unreal hashes all tags into some number and compares them as ints, then tag or no tag does not matter.
An enum switch compares integers (all enums are integers underneath), so again it does not matter.
Updating a variable is even cheaper than things like ADD or multiply, so again it does not matter. Guessing now, but if you drag the delta pin (or any variable) to another node, Unreal probably passes a pointer to it anyway (by creating a variable that points to it) and reads it at the other end.
Raycasts, yes, are the first thing that does matter. Just don't abuse them and all will be fine (e.g., don't have 500 actors each casting 100 raycasts).
This was tested in the topic I linked at the beginning.
About optimizing code:
Most interactive things in a game can run 5-10 times per second and the player will not notice the difference. So if you have a huge number of actors that need to run some code, process them in batches, about 5 times per second for each batch.
Modern CPUs are a real marvel. Do not worry about which nodes you use; worry more about how many times you use them.
Packaged builds run Blueprints way, way faster than playing in the editor.
There is a profiler and other debug tools; watch some tutorials and check for yourself which parts of your code have the most impact on speed.