As it currently stands, the AI Agent configuration is all done in blueprints, but all the state/action/goal code is non-UObject, so you need to get into C++. A C++ coder creates a library of Goals, States and Actions, the designer just assigns a selection of these to a character blueprint, and the framework takes care of the rest.
There are really only two bits of code you need to write: 1) each state needs to implement an ‘Evaluate’ method that evaluates itself against the world or agent, and 2) each action needs to implement an ‘Execute’ method that actually performs the action in the world, runs animations, etc.
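To illustrate the shape of those two methods, here is a rough sketch with made-up names and signatures (the real framework headers may well differ):

#include "CoreMinimal.h"

// Hypothetical atom: a single key/value piece of world state.
struct FExampleGOAPAtom
{
    FName Key;
    bool bValue;
};

// 1) A state evaluates itself against the world or agent and updates its atom.
class FExampleIsFoodAvailableState
{
public:
    FExampleGOAPAtom Atom{ FName(TEXT("IsFoodAvailable")), false };

    void Evaluate(const class AGOAPAIController* Agent)
    {
        // Query the agent/world here, e.g. count food actors within range,
        // and write the result into the atom the planner reads.
        Atom.bValue = false; // placeholder result
    }
};

// 2) An action performs the work in the world: movement, animations, etc.
class FExampleGatherFoodAction
{
public:
    // Returns true once the action has finished so the FSM can move on.
    bool Execute(class AGOAPAIController* Agent, float DeltaSeconds)
    {
        // e.g. walk to the nearest food actor, play a gather montage,
        // then spawn/pick up the food item.
        return true;
    }
};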
Ultimately I do want it to be completely extensible via blueprints…but that’s a challenge I will get to a bit further down the line.
During testing at the moment, instead of having AIs running around murdering each other, I’ve made them farm and gather food. I’ll probably start building out an FPS package with the usual attack, hide and patrol type behaviours once the framework is finished.
It would be nice to have bundled basic goals/states/actions so that folks like me could create something tangible and perhaps eventually build upon it (or someone could be hired to build upon it in C++).
Good luck with the project! Hopefully it will make it to the Marketplace eventually!
Been busy recently, and my internet has been playing up :mad:
I’ve pushed a branch with the basic working implementation. It has an FSM with Idle/MoveTo/DoAction states and a very simple depth-first planning algorithm which just takes the first valid path, here:
It’s very crude and not in the slightest bit optimised, about as slow as it can be, but it works.
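For anyone curious what “depth-first, first valid path” means in practice, here’s a toy standalone sketch of that kind of planner (world state simplified to boolean atoms; this is nothing like the actual repo code):

#include <map>
#include <string>
#include <vector>

using WorldState = std::map<std::string, bool>;

struct FToyAction
{
    std::string Name;
    WorldState Preconditions;
    WorldState Effects;
};

// True if every condition key/value is present in the state.
static bool Satisfies(const WorldState& State, const WorldState& Conditions)
{
    for (const auto& Pair : Conditions)
    {
        auto It = State.find(Pair.first);
        if (It == State.end() || It->second != Pair.second)
        {
            return false;
        }
    }
    return true;
}

// Depth-first search through world states: recurse on the first applicable
// action and stop at the first sequence that reaches the goal. No costs,
// no optimality, just a crude depth cap to guard against cycles.
static bool PlanDepthFirst(const WorldState& State, const WorldState& Goal,
                           const std::vector<FToyAction>& Actions,
                           std::vector<std::string>& OutPlan, int Depth = 0)
{
    if (Satisfies(State, Goal))
    {
        return true;
    }
    if (Depth >= 10)
    {
        return false;
    }
    for (const FToyAction& Action : Actions)
    {
        if (!Satisfies(State, Action.Preconditions))
        {
            continue;
        }
        WorldState Next = State;
        for (const auto& Effect : Action.Effects)
        {
            Next[Effect.first] = Effect.second;
        }
        OutPlan.push_back(Action.Name);
        if (PlanDepthFirst(Next, Goal, Actions, OutPlan, Depth + 1))
        {
            return true; // first valid path wins
        }
        OutPlan.pop_back();
    }
    return false;
}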
I’ve kinda paused now because to start optimising it I need a decent suite of actions/states, so I’m scribbling down game ideas to build up around it. I might just do the usual first person shooter type setup to keep things simple.
I just have one goal, ‘Be Nourished’ (keep health above 99), and two actions: Gather Food and Eat.
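With nourishment simplified to a single boolean atom, that setup could be fed to the toy planner sketched above like so:

WorldState Current = { { "IsNourished", false }, { "IsFoodAvailable", false } };
WorldState Goal = { { "IsNourished", true } };

std::vector<FToyAction> Actions = {
    { "GatherFood", { { "IsFoodAvailable", false } }, { { "IsFoodAvailable", true } } },
    { "Eat",        { { "IsFoodAvailable", true } },  { { "IsNourished", true } } }
};

std::vector<std::string> Plan;
if (PlanDepthFirst(Current, Goal, Actions, Plan))
{
    // Plan is now { "GatherFood", "Eat" }
}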
@mid_gen - So I was looking at implementing GOAP from scratch just for the experience, and was wondering about some implementation details in your version.
Namely, I was thinking of setting it up so that PerformAction is a BT_Task node that lets you reference an action object, and then using the BT for the FSM, e.g., set up Idle and Move states in the BT, with PerformAction being another BT_Task node.
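Something like this is what I have in mind for the PerformAction task (just a sketch; UGOAPActionObject is a hypothetical UObject wrapper for an action, not something from your repo):

#pragma once

#include "CoreMinimal.h"
#include "BehaviorTree/BTTaskNode.h"
#include "BTTask_PerformAction.generated.h"

UCLASS()
class UBTTask_PerformAction : public UBTTaskNode
{
    GENERATED_BODY()

public:
    // The action this task should run, assigned per node in the BT editor.
    UPROPERTY(EditAnywhere, Category = "GOAP")
    TSubclassOf<class UGOAPActionObject> ActionClass;

    virtual EBTNodeResult::Type ExecuteTask(UBehaviorTreeComponent& OwnerComp, uint8* NodeMemory) override
    {
        // Fetch/instantiate the action, call its Execute(), and return
        // InProgress + FinishLatentTask() later if it runs over multiple frames.
        return EBTNodeResult::Succeeded;
    }
};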
Another thing: it seems from your devblog that you’re doing your own pathfinding. I’m curious why you do that instead of hooking into the normal pathfinding.
Edit again: The above paper actually implements planning as an extension of the BT using a similar method to the one I propose above. The part of the paper in question is “4.3 Implementation” where it mentions the “Planning Node”.
I’ve never looked into the BT implementation in UE, if it’s possible to re-use an Action class from there I should take a look into it! Thanks.
The planner is just a pathfinding algorithm, except instead of finding a path through a navmesh, you’re finding a path through world states…so in the devblogs when I’m talking about pathfinding, I’m talking about the planner. I use the standard AIController stuff for actually moving the AI Agent. The current pathfinding algorithm in the planner is just a simple depth-first search; you can use whatever method you want, A*, etc.
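i.e. once the planner says “move to X”, the agent just uses the engine’s navigation, something along these lines (controller and function names are just for illustration):

#pragma once

#include "CoreMinimal.h"
#include "AIController.h"
#include "ExampleGOAPController.generated.h"

UCLASS()
class AExampleGOAPController : public AAIController
{
    GENERATED_BODY()

public:
    // The planner only decides *that* the agent should move to the target;
    // the engine's navmesh pathfinding and path following do the rest.
    void MoveToTarget(AActor* Target)
    {
        MoveToActor(Target, /*AcceptanceRadius=*/50.f);
    }
};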
I love you so much for this. I was looking at implementing this in UE4 for a personal project of mine as well. Have you looked at the plan queue at all? I know the reason FEAR had so few AI on screen at a time was how hard they were hitting their planner, once every couple of seconds or so. They also had no culling of their GOAP actors, so the rats at the beginning of the first level were still planning even when the player was in another part of the level. I mention this because the more actions the AI could put into their plan queue, the faster the system ran with more AI at the same time.
Thanks for doing a lot of legwork here, I will definitely keep an eye on it!
@mid_gen - Trying to compile the project from Github, I get these errors:
> Engine\Source\Runtime\Core\Public\Templates\SharedPointer.h(548): error C2440: 'initializing': cannot convert from 'GOAPState *const ' to 'GOAPAtom *'
> Engine\Source\Runtime\Core\Public\Templates\SharedPointer.h(548): note: Types pointed to are unrelated; conversion requires reinterpret_cast, C-style cast or function-style cast
> D:\ChromeDL\userimages\GOAPer-master\GOAPer-master\Source\GOAPer\Private\GOAPAction\CreateFoodAction.cpp(15): note: see reference to function template instantiation 'TSharedPtr<GOAPAtom,0>::TSharedPtr<GOAPState>(const SharedPointerInternals::FRawPtrProxy<GOAPState> &)' being compiled
> D:\ChromeDL\userimages\GOAPer-master\GOAPer-master\Source\GOAPer\Private\GOAPAction\CreateFoodAction.cpp(15): note: see reference to function template instantiation 'TSharedPtr<GOAPAtom,0>::TSharedPtr<GOAPState>(const SharedPointerInternals::FRawPtrProxy<GOAPState> &)' being compiled
> Engine\Source\Runtime\Core\Public\Templates\SharedPointer.h(548): error C2439: 'TSharedPtr<GOAPAtom,0>::Object': member could not be initialized
> Engine\Source\Runtime\Core\Public\Templates\SharedPointer.h(822): note: see declaration of 'TSharedPtr<GOAPAtom,0>::Object'
Any ideas? Was there something committed only part way?
It’s hard to tell whether the Preconditions in GOAPAction class should be GOAPState, or if the IsFoodAvailable (etc) calls should be changed to return a GOAPAtom instead.
Edit: This ended up compiling though I don’t know if it’s right:
IsFoodAvailableState* foodStateFalse = new IsFoodAvailableState(false);
IsFoodAvailableState* foodStateTrue = new IsFoodAvailableState(true);
// Only perform if there's no food
PreConditions.Add(MakeShareable<GOAPAtom>(&foodStateFalse->Atom));
// Make food!
Effects.Add(MakeShareable<GOAPAtom>(&foodStateTrue->Atom));
Edit again: Launching the project after successfully compiling, it gives a warning about missing a cursor asset and missing the TopDownCharacter asset.
Edit again again: Seems like the nutrition value never decrements by default, for some reason.
Edit final: Added a navigation mesh, a couple of GOAP actors and some food, but they don’t seem to move or do anything. Any ideas?
Edit final again: Got it to do something, but the bots still don’t move to the food.
I can see it changing to DoAction -> GatherFood, but it never moves.
So I’m back actively developing this again now after finishing up work on my terrain generator.
I’ve made a fair few changes:
- Moved the framework code into a plugin (GOAPer)
- Removed the GOAPState class; the framework is no longer concerned with evaluating state
- The game module’s implementation of AGOAPAIController is now responsible for updating state
- Slowly building out the example project; just added AIPerception into the controller implementation (a typical setup sketch is below)
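For reference, the AIPerception hookup is the usual UE4 pattern, roughly like the sketch below (the controller name, member names and the callback body are placeholders for my actual implementation; header declarations and the UFUNCTION for the callback are omitted):

#include "Perception/AIPerceptionComponent.h"
#include "Perception/AISenseConfig_Sight.h"

AExampleGOAPController::AExampleGOAPController()
{
    PerceptionComp = CreateDefaultSubobject<UAIPerceptionComponent>(TEXT("Perception"));
    UAISenseConfig_Sight* SightConfig = CreateDefaultSubobject<UAISenseConfig_Sight>(TEXT("SightConfig"));
    SightConfig->SightRadius = 1500.f;
    SightConfig->LoseSightRadius = 2000.f;
    SightConfig->PeripheralVisionAngleDegrees = 70.f;
    SightConfig->DetectionByAffiliation.bDetectNeutrals = true;

    PerceptionComp->ConfigureSense(*SightConfig);
    PerceptionComp->SetDominantSense(SightConfig->GetSenseImplementation());
    PerceptionComp->OnPerceptionUpdated.AddDynamic(this, &AExampleGOAPController::OnPerceptionUpdated);
}

void AExampleGOAPController::OnPerceptionUpdated(const TArray<AActor*>& UpdatedActors)
{
    // Feed what the agent can see back into its world state here,
    // e.g. set an "EnemyVisible" atom the planner can react to.
}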
The repo is now private as I have non-shareable assets in there (AnimationStarterPack). I’m also considering putting the plugin on the marketplace at some point once it’s finished.
I’ll open up the repo to some testers in future to get some feedback. In the meantime, I’m still updating the devblog :
Would love to take another shot at it @mid_gen, I’m really curious to see if I can use it to generate quests according to a strategy in some whitepapers I’ve been reading, but I couldn’t get the basic repo working last time.
Yeah sorry I had committed the cardinal sin of pushing a broken state to the repo.
There was a lot of really, really bad code in there anyway, so it’s just as well.
If you pm me your github username I can add you as a collaborator so you can have a look.
The plugin has some pretty fundamental limitations at the moment, namely it only supports single state goals, and just takes the first plan it can find rather than looking for an optimal one.
I’m building out a decent sized set of actions for the game module (it’s a shooter type game), then once I’ve got a decent sized graph to work with I’ll start completing the planner functionality.
Good progress! Lots of refactoring and code cleaning up done.
The planner has been completely rewritten. I’d taken a slightly wrong interpretation of the papers I’d read when researching the method, and was constructing a graph where each node consists of a single state Atom (e.g. HasWeapon : true). The new graph build algorithm looks like this:
This gives a graph where each node is a complete world state: the current world state plus all the effects that have led to that node. From this graph we then select any nodes that satisfy the goal state as path candidates.
Walking back up the parents from each candidate gives us a list of valid plans. Now we just have to choose which one to use.
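In pseudo-C++ terms (reusing the toy WorldState/FToyAction/Satisfies helpers from the earlier depth-first sketch, so again nothing like the plugin’s actual code), the build and plan extraction boil down to something like:

struct FPlanNode
{
    WorldState State;        // current world state + all effects applied on the way here
    int ParentIndex;         // -1 for the root node
    std::string ActionName;  // action whose effects produced this node
};

static std::vector<std::vector<std::string>> BuildPlans(
    const WorldState& Start, const WorldState& Goal, const std::vector<FToyAction>& Actions)
{
    std::vector<FPlanNode> Nodes = { { Start, -1, "" } };
    std::vector<std::vector<std::string>> Plans;

    // Expand nodes in the order they were added (a crude node cap guards against blow-up).
    for (size_t i = 0; i < Nodes.size() && Nodes.size() < 256; ++i)
    {
        for (const FToyAction& Action : Actions)
        {
            if (!Satisfies(Nodes[i].State, Action.Preconditions))
            {
                continue;
            }
            WorldState Next = Nodes[i].State;
            for (const auto& Effect : Action.Effects)
            {
                Next[Effect.first] = Effect.second;
            }
            if (Next == Nodes[i].State)
            {
                continue; // effects change nothing: skip to avoid trivial circular paths
            }
            Nodes.push_back({ Next, static_cast<int>(i), Action.Name });

            // Any node satisfying the goal is a plan candidate:
            // walk back up the parents to recover the action sequence.
            if (Satisfies(Next, Goal))
            {
                std::vector<std::string> Plan;
                for (int n = static_cast<int>(Nodes.size()) - 1; n != -1; n = Nodes[n].ParentIndex)
                {
                    if (!Nodes[n].ActionName.empty())
                    {
                        Plan.insert(Plan.begin(), Nodes[n].ActionName);
                    }
                }
                Plans.push_back(Plan);
            }
        }
    }
    return Plans;
}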
Epic work mid_gen, I think that refactor (world state + the effects that lead to it) is a key piece of what’s needed to apply the quest generation techniques. Haven’t had any time to work on it yet, but this is great.
Building out more actions in the sample project now to see how the planner copes. Fixed a few issues with circular paths and pathing beyond target state.