Question About Using a Single Long Animation Sequence for Motion Matching

Hello Epic Team,

I’m currently applying Motion Matching using the Game Animation Sample, and I noticed that the sample is structured around multiple animation sequences divided by motion type (e.g., starts, stops, turns).

However, I’ve seen several examples online — such as the “dance card” approach — where a single long animation sequence contains all motion variations, and Motion Matching is driven using only that one sequence.

I have a few questions regarding this method:

  1. Is using a single continuous sequence — without splitting motions into separate animations — an acceptable or recommended approach by Epic?
  2. What are the potential drawbacks or limitations of using Motion Matching this way?
  3. If this method is not officially supported or recommended, could you share the reasons why, and provide any related guidance or best practices?
  4. I would like to apply the Motion Matching system using a dance card-style long sequence. Is there any official documentation or guide from Epic for implementing this specific approach?

Here is an example video I’m referencing:

https://youtu.be/BBlUrJmGCk0

Thank you very much for your time and insights.

Hi, yeah, the Unreal approach to Motion Matching is based on having many animation sequences that provide motion coverage and selecting a pose from those, rather than using one single continuous animation containing all the frames.

The benefit is that this lets us guide pose selection more easily than we could with a single continuous animation. We can create movement sets via databases, which restrict pose selection to a subset of the assets depending on context, for instance whether we’re in steady-state locomotion or performing a start/stop. It also lets us leverage existing functionality for marking up animations: is an animation designed to loop, or is it a one-shot? And we can use notifies to mark up frames we want to sample, frames where we want to block reselection of the pose, frames where we want to bias reselection, and so on. Finally, it makes it easier for us to build and use tools to debug motion matching setups than would otherwise be the case.

In practice, the approaches we’re aware of that use a single continuous animation for motion matching still provide similar context to help pose selection, but they do it by marking up that single animation: tagging walks, runs, pivots, loops, and so on. With those systems, you’re essentially doing similar work to the Unreal approach, just in a different format. It’s not simply a case of taking data straight from mocap, dropping it into a motion matching solution, and having it generate an accurate pose.

Another advantage of individual sequences is that it’s easier to avoid redundant duplicate data. With separate clips, it’s easy to know that we have one animation running straight forward, one running at 45 degrees, and so on; that’s much harder to track in a single continuous animation. And when you have duplicate data, you’re more likely to get animation shredding, where the system jumps from frame to frame within the animation rather than playing it out sequentially as would usually be preferred.

If you have GDC vault access, you might be interested in taking a look at the Motion Matching presentation that Naughty Dog gave a few years ago. It’s similar to the approach we’ve taken and discusses the advantages compared to the traditional approach.

In terms of whether you could use just a single continuous animation with Unreal’s motion matching system: it’s possible, but we wouldn’t recommend it. As you can see in the video, you can get 80–90% of the way to a good result, but it’ll be challenging to get the final 10–20% of the quality you want because you can’t refine the pose selection in the ways I’ve described. You could make the schema more complex to help with that, but it may become unmanageable. If you did want to go with this approach, you’d probably need to implement the kind of tools I mentioned above: some way to mark up sections of that animation to inform the pose selection.
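To sketch the idea (this is purely illustrative, not an existing engine class or an official workflow; the class name, enum, and generated header below are all hypothetical), one way to mark up sections of a single long take would be a custom notify state that carries a motion label, which your own indexing/schema code could then read:

```cpp
// Hypothetical markup notify for tagging sections of one long animation take.
// Only UAnimNotifyState and UEnum::GetDisplayValueAsText are engine API here;
// everything else is a made-up example.
#pragma once

#include "CoreMinimal.h"
#include "Animation/AnimNotifies/AnimNotifyState.h"
#include "AnimNotifyState_MotionTag.generated.h"

UENUM(BlueprintType)
enum class EMotionTag : uint8
{
	Loop,
	Start,
	Stop,
	Pivot
};

UCLASS(meta = (DisplayName = "Motion Tag"))
class UAnimNotifyState_MotionTag : public UAnimNotifyState
{
	GENERATED_BODY()

public:
	// Label applied to every frame covered by this notify state window.
	UPROPERTY(EditAnywhere, BlueprintReadOnly, Category = "Motion Matching")
	EMotionTag Tag = EMotionTag::Loop;

	// Show the tag in the notify track so the markup is readable in the editor.
	virtual FString GetNotifyName_Implementation() const override
	{
		return FString::Printf(TEXT("MotionTag: %s"), *UEnum::GetDisplayValueAsText(Tag).ToString());
	}
};
```

You would still need to write the code that consumes these windows when building or filtering the search index, which is exactly the kind of extra tooling work I was referring to.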

Hopefully, that provides a bit more context. Happy to talk about this more if you’d like.

The issue with AI-controlled characters tends to be due to the pathing generated by the AI system. The Motion Matching implementation will base its output on what the movement component is doing. The problem comes when a navigation system generates paths with very sharp changes in facing direction (think of the path moving directly forwards, then immediately pivoting 90 degrees to navigate around an object). When Motion Matching gets that input from the movement component, it will respond accordingly, and you’ll see a discontinuity in the animation as the facing direction changes by 90 degrees. The solution to this is to generate smoother AI pathing where these kinds of sharp turns are smoothed out into arcs. If you’re able to do that, Motion Matching should give just as good results as with a player character.
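To illustrate what I mean by smoothing the path (this is only a sketch, not engine code; the function and parameter names are made up), you could post-process the nav path points before they drive movement, replacing each interior corner with two cut-back points so the heading changes gradually rather than instantly:

```cpp
#include "CoreMinimal.h"

// Illustrative corner rounding for a nav path. Each interior corner point is
// replaced by two points pulled back along the adjacent segments (a simple
// chamfer); a real implementation might insert proper arc or spline samples.
static TArray<FVector> SmoothPathCorners(const TArray<FVector>& PathPoints, float CornerRadius = 100.f)
{
	TArray<FVector> Smoothed;
	if (PathPoints.Num() < 3)
	{
		return PathPoints; // nothing to smooth
	}

	Smoothed.Add(PathPoints[0]);
	for (int32 i = 1; i < PathPoints.Num() - 1; ++i)
	{
		const FVector& Prev   = PathPoints[i - 1];
		const FVector& Corner = PathPoints[i];
		const FVector& Next   = PathPoints[i + 1];

		// Pull back from the corner along each segment, clamped to half the segment length.
		const float BackDist = FMath::Min(CornerRadius, (float)FVector::Dist(Prev, Corner) * 0.5f);
		const float FwdDist  = FMath::Min(CornerRadius, (float)FVector::Dist(Corner, Next) * 0.5f);

		Smoothed.Add(Corner + (Prev - Corner).GetSafeNormal() * BackDist);
		Smoothed.Add(Corner + (Next - Corner).GetSafeNormal() * FwdDist);
	}
	Smoothed.Add(PathPoints.Last());

	return Smoothed;
}
```

The exact technique matters less than the outcome: the path the movement component follows should not contain instantaneous 90-degree changes in facing direction.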

Good to hear the information was useful. I wanted to follow up quickly since I spoke with one of the devs currently working on integrating motion matching with an AI system. The only extra information they had, apart from having the AI generate pathing data that doesn’t contain discontinuities in facing direction (or in acceleration, which I didn’t mention previously), is about trajectory generation.

If you look in the Game Animation Sample (GASP), you’ll see that we have the Update_Trajectory function in ABP_SandboxCharacter. This generates the trajectory data used by the motion matching system. In GASP, most of the work to generate that data is done by calling into UPoseSearchTrajectoryLibrary::PoseSearchGenerateTransformTrajectory. If your AI system is using the Character Movement Component (CMC), this wouldn’t change much for an AI implementation versus the player character implementation in GASP. But if you were using Mass for AI navigation, you would likely need to reimplement this functionality to generate the trajectory data from pathing/movement data taken from Mass. We also strongly recommend that this work be done entirely natively, not in the anim blueprint event graph (it’s only shown in the event graph in GASP for simplicity).
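Purely as an illustration of that last point (the struct and function below are hypothetical stand-ins, not the PoseSearch types, and this is not code from GASP), the prediction half of a trajectory could be built from Mass-style movement data along these lines, and then converted into whatever trajectory structure your engine version’s Motion Matching node actually consumes:

```cpp
#include "CoreMinimal.h"

// Hypothetical trajectory sample; stands in for the real PoseSearch trajectory type.
struct FSimpleTrajectorySample
{
	FVector Position   = FVector::ZeroVector; // relative to the character, at TimeOffset
	FVector Velocity   = FVector::ZeroVector;
	float   TimeOffset = 0.f;                 // seconds into the future
};

// Predict future samples from the current velocity and the desired velocity coming
// from Mass path following / steering, blending between them so the predicted path
// bends smoothly instead of snapping to a new direction.
static TArray<FSimpleTrajectorySample> PredictTrajectoryFromMassData(
	const FVector& CurrentVelocity,
	const FVector& DesiredVelocity,
	float PredictionTime = 1.f,
	int32 NumSamples = 8,
	float VelocityBlendTime = 0.25f) // time constant for blending towards DesiredVelocity
{
	TArray<FSimpleTrajectorySample> Samples;
	FVector Position = FVector::ZeroVector;
	FVector Velocity = CurrentVelocity;
	const float Dt = PredictionTime / NumSamples;

	for (int32 i = 1; i <= NumSamples; ++i)
	{
		// Exponential approach towards the desired velocity.
		const float Alpha = 1.f - FMath::Exp(-Dt / VelocityBlendTime);
		Velocity = FMath::Lerp(Velocity, DesiredVelocity, Alpha);
		Position += Velocity * Dt;

		Samples.Add({ Position, Velocity, Dt * i });
	}
	return Samples;
}
```

The history half of the trajectory would come from recorded past transforms, and the whole thing belongs in native code for the reasons above.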

I just wanted to mention that for completeness. Hope it helps.

Thank you, I learned a lot from your reply.

I have an additional question.

Is it okay to apply Motion Matching to AI-controlled characters?

Since AI characters usually move using the Navigation System (e.g., NavMesh) rather than input-driven movement like a player, I’ve heard this can cause issues with Motion Matching.

Are there any known limitations, considerations, or best practices when applying Motion Matching to AI movement? I would greatly appreciate any insights or guidance on this.

Thank you so much for teaching me something new.