The scenario: In my game, the player can grab enemies and swing them around. When he does, the enemy should assume certain poses in response to being swung. The nature of the swings is such that the way the enemy is attached to the player will change during the animation.
For example: the idle/walk has the enemy held by the torso. When the player presses attack, the player quickly tosses the enemy, catches him by the leg, and swings him in a big overhead arc. The enemy should assume an arms-up position and bend with this swing.
My question is: what is the best way to animate this? I can animate the desired moves of the player no problem. But how best to animate his interaction with the enemy? Is there some sort of standard workflow?
My gut tells me to create a bone (as I’ve done for my other weapons) to act as an animatable connector/socket for the enemy mesh, and then animate both characters in the same scene with the enemy’s root constrained to this bone. That way I can move the enemy around by moving the bone itself, and adjust his pose on top of that. Then I’d simply export the two animations separately, one for the player and one for the enemy, and trigger them by attaching the enemy’s actor to the new bone and playing both actors’ corresponding anims simultaneously.
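In engine terms, what I’m picturing is roughly the following. This is only a sketch: I’m assuming UE4-style C++ here, and the socket name and montage variables are placeholders I made up.

```cpp
// Rough sketch of the "attach + play both anims" idea (UE4-style C++ assumed).
// "EnemyGrabSocket" would be a socket on the new connector bone;
// PlayerSwingMontage / EnemySwingMontage are placeholder UAnimMontage* members.
void APlayerCharacter::StartSwingAttack(ACharacter* Enemy)
{
    // Constrain the enemy's root to the animatable connector bone.
    Enemy->AttachToComponent(GetMesh(),
        FAttachmentTransformRules::SnapToTargetNotIncludingScale,
        TEXT("EnemyGrabSocket"));

    // Start both exported clips on the same frame so they stay in sync.
    PlayAnimMontage(PlayerSwingMontage);       // the player's half of the scene
    Enemy->PlayAnimMontage(EnemySwingMontage); // the enemy's half
}
```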
But I’m concerned about synchronization, and the workflow seems pretty roundabout… and I’m wondering if anyone can offer insight on how to animate the interaction between two skeletons inside the game itself, especially where the interaction is not just playback of a prerecorded sequence of animations.
Depends on what app you’re using to do your animations, and whether it’s for in-game real time or part of a cut scene?
Either way, as long as the root node is at 0 0 0, the two will sync up in relation to one another.
You don’t need any sockets; the trick is to snap the root/reference nodes to the same spot in world space for a given event. Let’s say player A performs a side kick: if player B is out of range, nothing happens, but if A is within range, the event is triggered.
Syncing is not the hard part, though; figuring out the event trigger is.
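To make that concrete, the trigger plus root snap might look something like this (a rough sketch, again assuming UE4-style C++; the range value and montage names are placeholders):

```cpp
// Sketch of the event trigger + root snap (UE4-style C++ assumed).
// KickRange and the montage members are placeholder values/assets.
void APlayerCharacter::TrySideKick(ACharacter* Target)
{
    const float KickRange = 150.f; // tuned to the animation's reach

    // The event trigger: only fire the synced pair when B is within range.
    if (FVector::Dist(GetActorLocation(), Target->GetActorLocation()) > KickRange)
    {
        return; // B out of range: nothing happens
    }

    // Snap B's root to the spot the paired animation was authored against,
    // expressed relative to A's root, so both rigs share one reference point.
    const FTransform AuthoredOffset(FRotator(0.f, 180.f, 0.f),
                                    FVector(KickRange, 0.f, 0.f));
    Target->SetActorTransform(AuthoredOffset * GetActorTransform());

    // Start both halves of the A-B sync on the same frame.
    PlayAnimMontage(SideKickMontage_A);
    Target->PlayAnimMontage(SideKickMontage_B);
}
```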
As for the animations: if you’re using Motion Builder, as I do, then you just need to do the animation; the result can then be retargeted to the rig the game uses and saved as an A-B sync that can be switched around and used with any model that shares the same rig and animation set.
It’s for non-cinematic in-game commands. And that’s the issue: the player will be able to carry the enemy around in-game and transition into these attacks, so it’s not possible to simply create an animation where the two skeletons move about some arbitrary single origin point. The enemy must be defined in relation to the player’s location and rotation.
Well, that’s what having a root/reference node is for. As far as the animations go, it’s not the player model that is moving but the root node relative to the bounding box, and it’s always 0 0 0 local to the player model no matter its location in world space.
To animate a sync, the root/reference nodes always remain at 0 0 0 relative to one another while the player models do their thing, animated in world space. It’s a fixed value that never changes, so even though you could add sockets, you’re just adding what’s already there.
For that matter, the only practical reason for having such a root node is so that the rig has a point of reference as to its position in world space, and the root has always been used to match things up via a single reference point. Be it 2 characters or 10, this is how it’s usually done before getting into more complex AI-based solutions.
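In code terms, the relationship that shared root gives you is just a fixed composition (same UE4-style assumption as above; AuthoredOffset stands in for whatever 0-0-0-relative transform the two rigs were animated with):

```cpp
// Each frame of a sync, the partner's world transform is the fixed offset
// the pair was authored with, composed onto the player's root transform.
// (UE4-style C++ assumed; AuthoredOffset is a placeholder.)
void SyncToRoot(AActor* Partner, const AActor* Player,
                const FTransform& AuthoredOffset)
{
    Partner->SetActorTransform(AuthoredOffset * Player->GetActorTransform());
}
```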
Well, the problem with that is that I’m using animation blending on the arm, and I want the enemy actor in most cases to ALWAYS be constrained to the hand. If the enemy is defined in relation to the player root (0,0,0 in relative space), then he’s always going to remain in that position no matter what the mesh does… and that means creating separate swing/sway animations to make the enemy track the player’s hand for every possible configuration of his arm across blended anim states. I realize I probably didn’t communicate that clearly enough: if I define the enemy in relation to the player’s root, he only moves when the player’s root does, and I want him to move far more than that. I want the player’s animations to drive him around, though not always in a totally scripted fashion (i.e. it’s not always an exact predetermined sequence of events playing out; it’s based PARTIALLY on the player’s blending when moving around).
What I ended up doing was something similar to what I originally described, with the enemy defined in relation to a bone and constrained to it. It winds up working like you’re describing, except that the enemy is defined in relation to a non-zero point on the player. It admittedly DOES make the animation more complicated (as the player moves the constrained hand in ways that aren’t meant to move the enemy, I have to work in some offsetting animation on the enemy’s root), but the state blending is so much smoother.
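For anyone who finds this later, the runtime side of that ended up looking roughly like this (a sketch, again assuming UE4-style C++; the socket and montage names are made up):

```cpp
// Sketch of the hand-constraint approach (UE4-style C++ assumed).
// "hand_r_grab" is a placeholder socket on the player's hand bone;
// HeldByHandMontage is a placeholder UAnimMontage* member.
void APlayerCharacter::GrabEnemy(ACharacter* Enemy)
{
    // Constrain the enemy's root to a socket on the hand bone. Because the
    // socket follows the blended skeletal pose, arm blending drives the
    // enemy automatically, with no per-blend-state swing/sway anims needed.
    Enemy->AttachToComponent(GetMesh(),
        FAttachmentTransformRules::SnapToTargetNotIncludingScale,
        TEXT("hand_r_grab"));

    // The enemy's own "held" anim carries the root-offset keys that cancel
    // out hand motion that isn't meant to move him.
    Enemy->PlayAnimMontage(HeldByHandMontage);
}
```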
I do appreciate the feedback though! Even though I didn’t do it exactly, knowing that “build the animations in a space that shares the same constraints as the game will, and then export those animations individually for each skeleton” is the Right Way To Do It enabled me to move forward with this!