Too many AI - Slow FPS - Is THIS a solution?

I can only have about 8 AI enemy characters before I start to notice an incremental drop in FPS, and I was wondering whether the following would be a solution.

The enemy’s blueprint is huge. There are a LOT of nodes. Deactivating Event Tick has little to no effect on FPS, so I was wondering whether I should instead focus on moving every node I can to another “enemy master” blueprint. Instead of x number of enemies running around with all this code, it would all be contained within a single master blueprint that would be used to communicate with and reference the individual enemies.

Would this make a difference?

No, all child classes would inherit all the code. You need to refactor the code to be as performant as possible. Profile the code and see what’s causing the increase in frame time.

Yes, but it’s not a child class. I’m creating a separate blueprint that holds all the code that each enemy AI would borrow from when needed.

Same applies. If the code is the issue, then when you load it … it’s loaded. All you can do with code is refactor it: clean it up, optimize it, etc. Once there, you can go to C++ for more optimization/performance.

So having all that code stored in 1 BP would give the exact same performance as having that code stored on every AI? What you’re saying is that the number of nodes/variables/functions inside a blueprint doesn’t affect performance unless it’s actively calculating something…?

No, the code as a whole has to be loaded into memory, regardless of whether a node is being used or not. If I write 20 functions but only use one or two per frame, I still need all 20 loaded in memory.

If you have a single class that all instances are derived from, then that’s one instance of the code library in memory. If you have 10 unique classes, then that’s 10 class libraries loaded into memory, regardless of whether they duplicate functionality.

e.g.

BP_EnemyAI, 5 instances of the class are spawned in the game. Each client plus server will only load one instance of the class for functionality.

vs

BP_EnemyAI_1, BP_EnemyAI_2, BP_EnemyAI_3, BP_EnemyAI_4, BP_EnemyAI_5 … one of each is spawned in game. Each client, plus server will load one instance for each enemy AI class. So 5 classes loaded to memory.


edit for inclusion …

This does not apply to variables. Each instance (object reference) will have its own unique set of variables.

BP_EnemyAI, 5 instances = 1 instance of the class in memory, 5 instances of the class’s variables. One for each instance.
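In rough native C++ terms the same idea looks like this (a minimal sketch; the class, variable, and function names are just placeholders, not anything from your project):

```cpp
// Hypothetical native equivalent of BP_EnemyAI, for illustration only.
#include "GameFramework/Character.h"
#include "EnemyAI.generated.h"

UCLASS()
class AEnemyAI : public ACharacter
{
    GENERATED_BODY()

public:
    // Per-instance state: every spawned enemy gets its own copy of these.
    UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = "AI")
    float Health = 100.f;

    UPROPERTY(VisibleAnywhere, Category = "AI")
    AActor* CurrentTarget = nullptr;

    // Shared code: this function body exists once in memory,
    // no matter how many enemies are spawned.
    UFUNCTION(BlueprintCallable, Category = "AI")
    void ApplyHit(float Damage) { Health -= Damage; }
};
```

Spawn five of these with `GetWorld()->SpawnActor<AEnemyAI>(...)` and you get five copies of `Health` and `CurrentTarget`, but still only the one class loaded.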

I suppose it depends on what is meant by “a single master blueprint”. As Rev0verDrive notes, moving all the enemy’s functionality “up the hierarchy” to a parent class won’t impact performance.

BUT, if what you mean is generating a sort of “AI Brain”, then that actually could. The reason being that multiple AI actors are all performing individual checks, which may be redundant; if you have a single “brain” actor which performs the checks one time and then distributes that information to the AI, at least in situations where they don’t need per-actor checks, then it actually would increase performance. This is, in fact, a somewhat common strategy for games with many coordinated AI actors.

For things like pathfinding, for instance, you can’t really use a Brain. Each actor must determine its own path because it exists at a unique location in the world. But suppose you had actor-specific queries for the player; things like complex trace tests which determined whether the player was near cover, or whether the player was moving toward some critical location, or whatever. Those kinds of checks are “enemy agnostic”; they don’t involve any sort of logic which depends on the specific state of the enemy itself. So localizing them to one blueprint (a Brain actor, or performing them on the player and broadcasting them out, etc) would remove a lot of redundant calculations and let the enemy AI operate only when it needed to make decisions about things unique to itself.

The thing you have to consider is, how many of the checks and actions you’re performing are REALLY “ignorant” of the state of the enemy in question? Query results about things like the state of the level or actors in it can be passed along from a single central source to a host of AI actors, but anything the AI does which is in any way dependent on the state of the AI itself is basically going to have to be handled by the AI. Having a master Brain actor check the status of each individual AI actor, perform checks based on that data, and pass the results back TO said actor won’t help you. It’s only useful in cases where the result doesn’t change between actors, so what you’re essentially doing is removing a lot of duplicate calculations by performing them once and sharing the result.
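As a rough C++ sketch of what such a Brain could look like (all names here are invented, and the “near objective” check is just a stand-in for whatever enemy-agnostic query you actually need):

```cpp
// Hypothetical central "brain" actor: does the shared, enemy-agnostic
// checks once per frame and caches the results for every enemy to read.
#include "GameFramework/Actor.h"
#include "GameFramework/Pawn.h"
#include "Kismet/GameplayStatics.h"
#include "AIBrain.generated.h"

UCLASS()
class AAIBrain : public AActor
{
    GENERATED_BODY()

public:
    AAIBrain() { PrimaryActorTick.bCanEverTick = true; }

    // Cached results; individual enemies read these instead of recomputing them.
    UPROPERTY(BlueprintReadOnly, Category = "Brain")
    FVector PlayerLocation = FVector::ZeroVector;

    UPROPERTY(BlueprintReadOnly, Category = "Brain")
    FVector PlayerVelocity = FVector::ZeroVector;

    UPROPERTY(BlueprintReadOnly, Category = "Brain")
    bool bPlayerNearObjective = false;

    // Placeholder for some critical location the AI cares about.
    UPROPERTY(EditAnywhere, Category = "Brain")
    FVector ObjectiveCenter = FVector::ZeroVector;

    UPROPERTY(EditAnywhere, Category = "Brain")
    float ObjectiveRadius = 2000.f;

    virtual void Tick(float DeltaSeconds) override
    {
        Super::Tick(DeltaSeconds);

        if (const APawn* Player = UGameplayStatics::GetPlayerPawn(this, 0))
        {
            // Done once here instead of once per enemy.
            PlayerLocation = Player->GetActorLocation();
            PlayerVelocity = Player->GetVelocity();
            bPlayerNearObjective =
                FVector::Dist(PlayerLocation, ObjectiveCenter) <= ObjectiveRadius;
        }
    }
};
```

Each enemy then just reads `PlayerLocation` / `bPlayerNearObjective` from the one Brain instance when it needs to make a decision.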

Exactly. Brain based functionality would be a class loaded in the game mode for example. Said class would do a sweep of functionality per frame. Reliant actors would reference the results as needed.

Func: get location of all actors, store in array struct[reference, Location]. AI class -> get game mode -> get Pawn Locations
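In rough C++ terms (a sketch only; struct and class names are made up, and the blueprint version is the same shape), that sweep could look like:

```cpp
// Hypothetical game mode doing the per-frame "sweep" of pawn locations.
#include "GameFramework/GameModeBase.h"
#include "GameFramework/Pawn.h"
#include "EngineUtils.h"
#include "BrainGameMode.generated.h"

USTRUCT(BlueprintType)
struct FPawnSnapshot
{
    GENERATED_BODY()

    UPROPERTY(BlueprintReadOnly)
    APawn* Pawn = nullptr;                  // reference

    UPROPERTY(BlueprintReadOnly)
    FVector Location = FVector::ZeroVector; // location
};

UCLASS()
class ABrainGameMode : public AGameModeBase
{
    GENERATED_BODY()

public:
    ABrainGameMode() { PrimaryActorTick.bCanEverTick = true; }

    // Refreshed once per frame; every AI reads this instead of
    // gathering the locations itself.
    UPROPERTY(BlueprintReadOnly)
    TArray<FPawnSnapshot> PawnLocations;

    virtual void Tick(float DeltaSeconds) override
    {
        Super::Tick(DeltaSeconds);

        PawnLocations.Reset();
        for (TActorIterator<APawn> It(GetWorld()); It; ++It)
        {
            FPawnSnapshot Snapshot;
            Snapshot.Pawn = *It;
            Snapshot.Location = It->GetActorLocation();
            PawnLocations.Add(Snapshot);
        }
    }
};
```

The AI side is then the “get game mode -> get Pawn Locations” step: `Cast<ABrainGameMode>(GetWorld()->GetAuthGameMode())->PawnLocations` (server only, since clients don’t have the game mode).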

Not sure how you coded your AI, but blackboards were created for exactly this reason; they are much less resource-hungry than blueprints, so for AI, use blackboards.
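For reference, a minimal sketch of the usual setup in C++ (AIController + behavior tree + blackboard; the asset name and the “TargetLocation” key are placeholders, not something from your project):

```cpp
// Hypothetical AIController running a behavior tree with a blackboard.
#include "AIController.h"
#include "BehaviorTree/BehaviorTree.h"
#include "BehaviorTree/BlackboardComponent.h"
#include "EnemyAIController.generated.h"

UCLASS()
class AEnemyAIController : public AAIController
{
    GENERATED_BODY()

public:
    // Assigned in the editor to the enemy's behavior tree asset.
    UPROPERTY(EditDefaultsOnly, Category = "AI")
    UBehaviorTree* EnemyBehaviorTree = nullptr;

    // Other systems (e.g. a central "brain") can push data into the
    // blackboard; the tree's decorators/services react to it instead of
    // the pawn ticking heavy blueprint logic.
    void SetTargetLocation(const FVector& Location)
    {
        if (UBlackboardComponent* BB = GetBlackboardComponent())
        {
            BB->SetValueAsVector(TEXT("TargetLocation"), Location);
        }
    }

protected:
    virtual void OnPossess(APawn* InPawn) override
    {
        Super::OnPossess(InPawn);

        if (EnemyBehaviorTree)
        {
            // Initializes the blackboard declared on the tree asset
            // and starts running the tree.
            RunBehaviorTree(EnemyBehaviorTree);
        }
    }
};
```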

Moving code to a single parent doesn’t matter, since the AI will run it anyway. In general, don’t run performance-heavy functions on every tick (e.g. pathfinding, GetAllActorsOfClass, a 1 km multi-sphere trace).
Also make sure it’s the BP that is the cause and not the rendering, like an enemy weapon having 1M polys (true story).
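One cheap win along those lines is moving the expensive queries off Tick onto a timer. A sketch of the idea (the 0.25 s interval, 1500 cm radius, and class names are arbitrary choices for illustration):

```cpp
// Hypothetical enemy that refreshes an expensive "what's around me?" query
// on a timer instead of every frame.
#include "GameFramework/Character.h"
#include "Kismet/KismetSystemLibrary.h"
#include "Engine/EngineTypes.h"
#include "TimerEnemy.generated.h"

UCLASS()
class ATimerEnemy : public ACharacter
{
    GENERATED_BODY()

protected:
    FTimerHandle SenseTimer;

    UPROPERTY()
    TArray<AActor*> NearbyActors;

    virtual void BeginPlay() override
    {
        Super::BeginPlay();

        // 4 times per second is plenty for most proximity checks,
        // and far cheaper than doing the overlap every frame.
        GetWorldTimerManager().SetTimer(
            SenseTimer, this, &ATimerEnemy::UpdateNearbyActors, 0.25f, true);
    }

    void UpdateNearbyActors()
    {
        NearbyActors.Reset();

        TArray<TEnumAsByte<EObjectTypeQuery>> ObjectTypes;
        ObjectTypes.Add(UEngineTypes::ConvertToObjectType(ECC_Pawn));

        TArray<AActor*> Ignore;
        Ignore.Add(this);

        // One sphere overlap per 0.25 s instead of per tick.
        UKismetSystemLibrary::SphereOverlapActors(
            this, GetActorLocation(), 1500.f, ObjectTypes,
            APawn::StaticClass(), Ignore, NearbyActors);
    }
};
```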

Yes, this is exactly what I meant. I’m not using blackboards btw. Perhaps I should.