Hi, yeah, the Unreal approach to Motion Matching is based on having many animation sequences that provide motion coverage, from which we can then select a pose, rather than one single continuous animation containing all the frames.
The benefit is that this lets us guide pose selection more easily than we could with a single continuous animation. We can create movement sets via databases, which restrict pose selection to a subset of the assets - for instance, depending on whether we're in steady-state locomotion or performing a start/stop. It also lets us leverage existing functionality around marking up animations - is an animation designed to loop, or is it a one-shot? - and use notifies to mark up frames we want to sample, frames we want to block pose reselection on, frames where we want to bias reselection, etc. Finally, it makes it easier to create and leverage tools for debugging motion matching setups than would otherwise be the case.
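To make that concrete, here's a minimal sketch of what database-restricted selection with per-frame markup looks like conceptually. This is not the actual Pose Search API - `PoseEntry`, `select_pose`, and the fields on them are hypothetical names standing in for the database and notify markup described above:

```python
# Illustrative sketch only - hypothetical names, not the Pose Search plugin API.
from dataclasses import dataclass


@dataclass
class PoseEntry:
    clip: str              # which animation sequence this pose comes from
    frame: int             # frame index within that clip
    features: list         # trajectory/pose features used for matching
    block_reselection: bool = False  # notify markup: never jump onto this frame
    cost_bias: float = 0.0           # notify markup: bias selection toward/away


def cost(query, entry):
    """Squared feature distance plus any per-frame bias from markup."""
    dist = sum((q - f) ** 2 for q, f in zip(query, entry.features))
    return dist + entry.cost_bias


def select_pose(query, database):
    """Pick the cheapest candidate, skipping frames marked as blocked."""
    candidates = [e for e in database if not e.block_reselection]
    return min(candidates, key=lambda e: cost(query, e))


# A "starts" database restricts the search to just those assets, so steady-state
# locomotion clips are never even considered here.
starts_db = [
    PoseEntry("start_fwd", 0, [0.0, 0.0]),
    PoseEntry("start_fwd", 5, [0.4, 0.1], cost_bias=-0.05),  # preferred entry frame
    PoseEntry("start_45L", 0, [0.0, 0.5], block_reselection=True),
]

best = select_pose([0.3, 0.1], starts_db)  # frame 5 of start_fwd wins
```

The point is just that clip boundaries and per-frame markup give the selection step extra structure to work with, which is exactly what's hard to express when everything lives in one long take.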
In practice, the approaches we're aware of that use a single continuous animation for motion matching still provide similar context to help pose selection - but that context comes from marking up the single animation itself: marking up walks, runs, pivots, loops, etc. With those systems, you're essentially doing similar work to the Unreal approach, just in a different format. It's not simply a case of taking data straight from mo-cap, dropping it into a motion matching solution, and having it generate an accurate pose.
Another advantage of using individual sequences is that it makes it easier to avoid redundant duplicate data. With separate clips, it's easy to see that we have one animation running straight forward, one running at 45 degrees, and so on; that's much harder to audit within a single continuous animation. And when you have duplicate data, you're more likely to get animation shredding, where playback jumps from frame to frame within the data rather than playing out a sequence sequentially, as would usually be preferred.
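Here's a small sketch of how shredding arises from duplicates, and how a continuing-pose bias - the kind of thing motion matching systems commonly apply - counteracts it. Again, `next_frame`, the bias value, and the 1D "features" are all hypothetical simplifications for illustration, not the actual implementation:

```python
# Illustrative sketch only - hypothetical names, not the actual implementation.

def next_frame(query, poses, current_index, continuing_bias=0.1):
    """Choose the next pose for a 1D feature; discount the frame that
    continues playback sequentially so near-duplicates elsewhere in the
    data don't cause jumps (shredding)."""
    def frame_cost(i):
        dist = abs(query - poses[i])
        if i == current_index + 1:  # sequential continuation of current clip
            dist -= continuing_bias
        return dist
    return min(range(len(poses)), key=frame_cost)


# Two nearly identical walk cycles captured back-to-back in one long take:
poses = [0.0, 0.5, 1.0, 0.01, 0.51, 1.01]

# Without a bias, the near-duplicate far away in the take can win,
# producing a jump instead of sequential playback:
jumpy = next_frame(0.51, poses, current_index=0, continuing_bias=0.0)   # frame 4

# With the bias, playback continues sequentially from frame 0 to frame 1:
smooth = next_frame(0.51, poses, current_index=0)                        # frame 1
```

With separate, deduplicated clips you're less reliant on this kind of bias in the first place, because there's only one good candidate to begin with.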
If you have GDC vault access, you might be interested in taking a look at the Motion Matching presentation that Naughty Dog gave a few years ago. It’s similar to the approach we’ve taken and discusses the advantages compared to the traditional approach.
In terms of whether you could go with the approach of having just a single continuous animation and using that with Unreal's motion matching system: it's possible, but we wouldn't recommend it. As you can see in the video, you'll be able to get 80%-90% of the way to a good result, but it'll be challenging to get the final 10%-20% of the quality you want, because you can't refine the pose selection in the ways I've described. You could make the schema more complex to help with that, but it may become unmanageable. If you did want to go with this approach, you may need to implement the kind of tools I mentioned above - some way to mark up sections of that animation to inform the pose selection.
Hopefully, that provides a bit more context. Happy to talk about this more if you’d like.