And which would you say is better?
From what I gather, they are two terms for the same thing. It’s a system for matching motion based on distance.
Distance to what?
As the tip of the iceberg, the motion matching I've seen is based on the use of root motion, and since all movement is contained as data, distance can be calculated in advance within the cycle.
They are two completely different solutions to the same issue: making the animated character feel grounded.
Distance matching uses the distance from a specified point in space to decide which animation to play and at which keyframe:
I'm 50 cm from my starting point and I started running to the right: I should play the "start right" animation, and I should be at the 0.35-second keyframe.
The strength of this approach is that you can move the capsule however you want, and the character's animation will adapt accordingly.
With motion matching there's no notion of a point in space. It uses user input (move forward-right at half speed), parameters for how "sharply" you want the animation to follow that input, and a large library of root-motion animations covering every possible movement.
At each keyframe, motion matching decides which other animation to blend to, given the user's input direction and strength. There's no AnimationBlueprint StateMachine in this approach, if you will.
Say you have a single 15-minute animation with root motion, during which you perform every possible move. If at 7:30 into the clip you moved forward-right, motion matching will loop those few forward-right keyframes. Then, when you move backward, it will find the keyframes in the 15-minute animation where you walked backward and transition seamlessly, at the keyframe it deems optimal, from moving forward-right to moving backward.
Thank you for your responses, FrankieV and MaximeDupart. Is one method preferred over the other?
You only use motion matching if you have a motion capture studio.
The reason Ubisoft is developing this is that they want to feed raw motion capture data straight into the game, because:
1: They have the motion capture devices required.
2: Motion capture cleanup costs time + money.
3: Animation state machines aren’t flexible.
So basically animators can implement animation quickly without needing a dedicated programmer. All they need are some basic functions to look up frames in anim curves imported directly from MoCap data, later combined with IK rigs for real-time adjustments.
In this case the state machine is just a mark saying "when" the animation jumps from one frame to another, more like a state tree. There's no animation clip per state.
Everything is one BIG animation playing all at once.
Fast, cheap and simple to do. Animators happy, bosses saving money also happy.
Not really, as either way the output is based on data already contained within the animation at the moment of a state change. For example, moving forward is usually represented as a line trace from where the cycle begins to where it ends, giving the distance travelled, and from there any kind of direction change can be calculated ahead of the actual state change, so motion matching based on distance matching would be more accurate.
There is still not enough info as to the use of root motion, or even whether it could replace in-place animation given networking requirements, so which is best is still a matter of the context of the project in which motion blending will be used.