Do AnimBPs have common architectures for a class of problems?


I have never managed to reason cleanly about a sufficiently sophisticated AnimBP. I find myself stumbling over the same questions again and again, so I began wondering if there are patterns or architectures worth sharing when building your AnimBPs.

In my scenario, I am looking at an RPG character. There are enough resources online for creating a locomotion state machine to govern the character’s movement animations, but how do you go about layering or selecting other types of animations, such as context action animations (pickups, emotes, etc.)? Would you create a state machine called “Action” and read a resettable variable that tells you the character is trying to perform that specific action? How would you reset that variable exactly when the animation ends? Would you instead opt to play the animation montage directly from the character’s blueprint? What if some actions are additive while others are not? Now you cannot just blend poses directly; you need a Layered Blend Per Bone or Apply Additive before doing the final blend that selects the final pose. Often I fall into the trap of thinking like a C++ programmer, simply wanting to express an idea such as:

  • If Action.TargetBodySlot = Torso → play the anim through a Torso slot node
  • If Action.IsAdditive → apply it additively on top of some other base pose before proceeding

Those are just some questions that arise, so I was wondering if there is a common structure to an AnimBP that intuitively pieces these types of animation together. As an example, a simple “architecture” can be:

This should hypothetically yield a character that can walk around, perform blocking emote animations, all while having accompanying secondary animations for jiggly equipment on them and the likes.

An emote is simply a montage assigned to the proper slot.

Montages are a great and simple way to implement network-friendly one-off animations. Over time, across subsequent projects, I find myself moving from a very animgraph-centric approach toward using montages more and more.

My issue with Animation Blueprints is the underlying assumption that the animation data is known beforehand - it feels like hardcoding an animation. Recently I discovered I can expose animations as pins on the Play nodes, which helps make things more data-driven. I have also struggled to get transition animations to work nicely in the AnimBP: I have to track extra variables and maintain state changes properly, alongside having some insight into how they should all blend together, when all I really want to express is:

Play the Transition In Animation —> When Done, Play the Idle/Looping animation

I can achieve the above with animation montages because they support callbacks nicely as well. I am starting a new implementation today that relies less on AnimBPs and more on orchestrating transitions and blocking actions in C++ using Montages. If you have any words of warning, tips, or reassuring sentiments, now is the time to let me know. :slight_smile:

Best question ever. Design logic is a subject that really needs its own forum. :wink:

I’m of the mind that, at the very least, the animgraph part of the AnimBP should be 100% data-driven, with state changes coming from the controller level. The movement component, for example, already provides “some” context-based state changes, like walking, jumping, etc., and since an actor can only be in one state at any given time, it’s my opinion that it’s better to have the controller provide the necessary state changes without the AnimBP having to play 20 questions in the event graph. You could say this is a form of giving the character a brain.

Also, as a (cough cough) video game animator, I’m not a huge fan of state machines, as in my opinion the animation migration should not require an entry/exit argument; instead, make use of a blend by int, replacing the blend by bool in your graph.

I do like the idea, though, of creating a collection of AnimBP logic, as it should be the starting point for figuring out how the migration path should work based on the type of game being developed. The real root of the problem is not how things need to be wired up based on known ways and means, aka best practice, but how things should be assembled in a modular manner.

For example, our team is working on a run and gun, like Quake 3 but more tactical, that requires a lot of guns, so the base need is weapon-driven animation based on the currently selected weapon. The solution was to make a weapon BP that captures each weapon’s unique requirements. The weapon BP contains all of the animations for the weapon, first-person and third-person, as well as the required sound effects, weapon attachments, and specifications. In your logic block, the weapon becomes active on selection and is appended in the secondary animation block. Once again, this keeps the AnimBP data-driven, as the data comes from somewhere else without needing to be supported by the BP directly.

By the way, how did you make your logic block? Is there an app?

Hmmm, I want to discuss this further. You bring up the example of weapons containing the actual animation data. This is where I am leaning right now - I have a set of actions, behaviors, items - whatever they may be - that need their own unique animations. They follow the same FLOW, but use different assets. This is what I found counter-intuitive about AnimBP state machines: their default behavior has you apply a pose by using the Play node with a specific animation - but I normally do not know what animation should be played! It depends on the action, or in your case, the weapon. Ultimately this forces you to create an enum or int to blend by, and that enum/int is supposed to encompass every possible animation that can be played - making for a huge, messy flow.

Not only is the above counter-intuitive and verbose, but every time you add a new weapon with a unique animation to the game, you need to add a new entry to the enum, recompile, and reconnect your AnimBP nodes accordingly. Compare this to what I actually want to express:

That’s it… play the equip animation - whatever that animation may be!

So naturally, as an inexperienced animation programmer, I have to wonder whether this second method is unusual, and whether it defeats the purpose of using an AnimBP. Moreover, am I paying a price? I can imagine, for example, that this may cause trouble if your characters have different skeletons; an animation asset for one may not work on another. So it seems the compromise is: when animations are data-driven, it is usually for the sake of shared/common animations across all skeletons. Or maybe it runs under the assumption that the game only uses one skeleton - which, from what I’ve seen, is not an unusual assumption.

Can you confirm the above conclusions?

Also, I drew the previous figure roughly in Paint.NET; but I generally use as with the figures in this reply.

Mostly that’s because in your use case you do not use the animgraph at all, but a different montage fired through Blueprint.

There is no “equip” state at all. Almost never. Because you need to be able to run around AS you equip.
So the equip montage is usually played on top of a specific bone chain, from spine_01 and up.

Same for just about anything else that requires some sort of interchangeability or modularity.

Take this as an example of what is possible.
In the comments, click the time link for the TwinBlast retargeting.
The way it works is that any plant or item you make (which in this case you can interact with) has its own specific animation that goes along with it, inclusive of hand IK in this case to adjust animations on the fly.

It’s exactly the same idea with weapons: each weapon will have its own entirely different set of animations for the character that go along with it.

It would be easier to do a show and tell.
This is our current alpha AnimBP, built on a “just make it work” design, but the key point is how the migration pathway is designed to allow the logic to scale. Another key point is how weapons are added as a step toward a data-driven design, by creating an asset package that contains the unique requirements of a possessed object.

In theory, an AnimBP does not even need an event graph, as the animgraph can easily be configured to handle the required data directly, but it is really a chicken-versus-egg problem.

You have absolutely no idea how insightful that was. It makes you wish there were more higher-level design discussions and tutorials out there. I feel validated on certain doubts I had about state machines, and this helped clear up some misunderstandings I had about montages and slot nodes. Thanks for taking the time to upload this.

You’re welcome.

If you are planning a totally data-driven design, I might suggest avoiding state machines. They don’t scale very well compared to a tree-driven design.

Something else to consider: blueprints are just a means of information gathering, so from a logic standpoint it makes sense to do the evaluation in the controller BP and not in the animation BP.

After that, it becomes a matter of developing your own personal rule set in the face of what would be considered best practice.

Blueprints are for morons like myself who have no clue about C++ coding, so there are rules to follow. If one is an uber coder in C++, there is not a single rule that can’t be broken. :wink: