Good Afternoon All,
Thank you in advance for looking at this. I have a functional solution, but I am curious what the performance ramifications, if any, are.
I have an AI that has three basic components:
- The Brain object, which is an Actor Component. It has a TArray of possible actions.
- A base action class with derived classes for different actions (work, sleep, fight, etc.). Each action class has its own TArray of possible considerations, or thoughts.
- And lastly, a BaseConsideration class that the considerations are derived from (am I hungry, am I tired, am I under attack, etc.).
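To make the structure concrete, here is a minimal sketch of the three layers. It is stripped down to standard C++ so it stands alone (plain virtual classes and std::vector in place of UObject subclasses and TArray); the class and member names are just illustrative, not my actual code.

```cpp
#include <cassert>
#include <cmath>
#include <memory>
#include <vector>

// Consideration: scores 0..1 how relevant this action is right now.
// In Unreal this would be a UBaseConsideration : public UObject.
struct BaseConsideration {
    virtual ~BaseConsideration() = default;
    virtual float Score() const = 0;
};

struct HungryConsideration : BaseConsideration {
    float Hunger = 0.8f;  // hypothetical stat, 0..1
    float Score() const override { return Hunger; }
};

// Action: owns a list of considerations and combines their scores.
struct BaseAction {
    virtual ~BaseAction() = default;
    std::vector<std::unique_ptr<BaseConsideration>> Considerations;
    float Utility() const {
        float U = 1.f;
        for (const auto& C : Considerations) U *= C->Score();
        return U;
    }
};

struct WorkAction : BaseAction {};

// Brain: owns the possible actions, picks the highest-utility one.
// In Unreal this is the UActorComponent attached to each actor.
struct Brain {
    std::vector<std::unique_ptr<BaseAction>> Actions;
    BaseAction* Choose() const {
        BaseAction* Best = nullptr;
        float BestU = -1.f;
        for (const auto& A : Actions) {
            const float U = A->Utility();
            if (U > BestU) { BestU = U; Best = A.get(); }
        }
        return Best;
    }
};
```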
What I have working right now is a system where each action child class has a Blueprint made from it (BaseAction -> WorkAction -> WorkAction_BP), and I add the BP to the array of possible actions rather than the WorkAction class itself. I do something similar with the considerations added to the action: BaseConsideration -> HungryConsideration -> HungryConsideration_BP.
There will be little to no code running in the BP itself. I just like being able to assign actions and considerations more easily, as well as tweak certain variables/curves/etc. in the editor.
My question is: if I have 1000 actors running around with this system attached, am I losing much performance, if any, with those nested BP classes versus doing it directly in the C++ code itself? Something like going into the WorkAction code and filling the considerations array with Considerations.Add(Consideration1), Considerations.Add(Consideration2), etc.
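For comparison, the pure-C++ alternative I have in mind looks roughly like this. Again a standalone sketch (std::function and std::vector standing in for UObject considerations and TArray), with the Unreal equivalent noted in comments; the numbers are made up.

```cpp
#include <cassert>
#include <cmath>
#include <functional>
#include <vector>

// A consideration reduced to a plain scoring function returning 0..1.
using Consideration = std::function<float()>;

struct WorkAction {
    std::vector<Consideration> Considerations;
    WorkAction() {
        // The "directly in C++" version: hard-wire the considerations in
        // the constructor instead of assigning them on a Blueprint asset.
        // In Unreal this would be something like
        // Considerations.Add(CreateDefaultSubobject<UHungryConsideration>(...)).
        Considerations.push_back([] { return 0.5f; });  // e.g. "am I hungry?"
        Considerations.push_back([] { return 0.9f; });  // e.g. "am I tired?"
    }
    float Utility() const {
        float U = 1.f;
        for (const auto& C : Considerations) U *= C();
        return U;
    }
};
```

The trade-off as I understand it: this version loses the per-asset tweaking of curves and variables in the editor, but it skips loading and resolving the Blueprint classes entirely.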
I realize that my ability to program efficiently (which is absolutely in doubt) will likely have more of an impact on performance than using the Blueprints. I am just curious which approach more experienced individuals think is better. I eventually want to simulate a very high volume of actors and would like to start off on the right foot.
Regards,
Masoric.