Motion Matching PSD (Pose Search Database) - Locomotion Coverage using Blendspaces vs Sequences

Hi,

We are currently exploring ways to increase the coverage of our Motion Matching databases. Our characters utilize Mover, and we plan to define minimum and maximum velocities per Locomotion gait (walk, run, sprint).

At this stage, we are populating our PSDs with Animation Sequences in a manner similar to the Game Animation Sample Project.

We are particularly interested in the use of Blendspaces to improve loop coverage (F, FR, FL, BR, BL, B) by including two samples that represent the designer-defined minimum and maximum velocities.

However, I haven’t been able to find documentation explaining the differences between the Sequence and Blendspace approaches. Would using Blendspaces actually increase coverage by selecting samples within them, or does the Sequence-based approach provide equivalent coverage?

If you have any insights into Blendspace usage within Motion Matching PSDs—such as recommended use cases, performance considerations, or overall benefits—we would greatly appreciate your guidance.

Thank you for your time and assistance.

Best regards,

David

Hi David, yes, it is possible to use blendspace assets within pose search databases as you described, to improve animation coverage. You add a blendspace in the same way you would a sequence. When indexing the database, the pose search system indexes frames based on a combination of a given time and a set of input parameters. Then, at runtime, when one of those poses is the best match, the system automatically feeds the required time and input parameters into the blendspace asset to generate the desired pose.
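As a rough mental model (illustrative Python only — the asset and method names here are made up, not the actual Pose Search API), indexing a blendspace amounts to sampling a set of input parameters and stepping through time for each, storing enough information to reproduce the exact pose later:

```python
# Conceptual sketch of blendspace indexing in a pose search database.
# Each entry stores the pose features plus the (asset, time, parameters)
# needed to reproduce that exact pose at runtime. All names are illustrative.

def index_blendspace(blendspace, parameter_samples, dt, extract_features):
    """Build database entries by sampling the blendspace over time and inputs."""
    entries = []
    for params in parameter_samples:      # e.g. sampled velocities/directions
        duration = blendspace.duration_at(params)
        t = 0.0
        while t < duration:
            pose = blendspace.evaluate(t, params)
            # features are used for matching; the tuple reproduces the pose
            entries.append((extract_features(pose), (blendspace, t, params)))
            t += dt
    return entries
```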

However, there’s one big caveat with using blendspaces with motion matching: the output of a blendspace is non-deterministic. The same input parameters can produce different output poses, depending on which sequence within the blendspace was the leader on the previous frame and on the smoothing applied to the input parameters. Before we implemented motion matching, this limitation with blendspaces wasn’t much of a problem. But motion matching relies on getting a consistent pose from a given set of inputs, and that isn’t guaranteed with blendspaces, unfortunately. We do have plans to revisit how blending within blendspaces works, but due to other priorities it will likely be quite some time before that is done.
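To make the smoothing part of that concrete, here is a small standalone illustration (plain Python, nothing Unreal-specific): a smoothed input parameter carries state between frames, so the same target input this frame can land on different effective parameters depending on what the value was last frame.

```python
# Why smoothed blendspace inputs are history-dependent: the effective
# parameter depends on the value carried over from the previous frame,
# not just on this frame's target input.

class SmoothedParam:
    def __init__(self, initial_value, smoothing_speed):
        self.value = initial_value
        self.speed = smoothing_speed

    def update(self, target, dt):
        # simple exponential approach toward the target
        alpha = min(1.0, self.speed * dt)
        self.value += (target - self.value) * alpha
        return self.value
```

Two instances fed the same target produce different effective parameters whenever their previous values differ — which is exactly what breaks the "same inputs, same pose" assumption that motion matching relies on.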

The current recommendation is generally to avoid using blendspaces with motion matching if possible. Best practice is to use sequences with discrete velocities, directions, etc. If you need to increase animation coverage (to support jogs as well as walks and runs, for example), use animation modifiers to generate new animations, or use the trajectory export tool to export the required trajectory motion and fix up existing animations against it in a DCC tool (we covered some of this process in one of the Game Animation Sample livestreams - I can give you a link if this is of interest).

If you still want to use blendspaces for continuous velocity, there are a few things you can look at that may improve the behaviour. The first is to limit the number of sequences within each blendspace. You can also disable the smoothing functionality via a few properties on the blendspace asset (the per-axis smoothing settings and the sample smoothing options):

[Image Removed]

There are alternative options as well, which you can use to vary the output from motion matching after a pose has been selected, to give a more continuous velocity. One option is to use blendspaces as you described (ie, walk + run forward) but only sample them at one of those velocities. So you would sample all of your blendspaces just at the run velocity. Pose search will then select the blendspace when it’s the best directional match at runtime, but you can then override the parameter on the blendspace controlling the velocity to alter the speed of the output. To do that, change the sampling method on the blendspace asset in the PSD to only sample at the point you want:

[Image Removed]

And then on the motion matching node, you can expose the blendspace parameters as an input pin (and specify that you want to override the blendspace inputs on a per-frame basis):

[Image Removed]

The problem with this approach is that your trajectory will no longer match the actual motion of the character, so you will likely need to modify your trajectory generation to scale with the change in velocity of the blendspace output.
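Putting those two pieces together in illustrative Python (again, made-up names, not the Unreal API): the database is only sampled at run speed, the velocity parameter is overridden after the match, and the predicted trajectory is scaled by the same ratio so the query still lines up with what the character is actually doing.

```python
# Sketch of "sample at one velocity, override at runtime": the database only
# indexes the blendspaces at RUN_SPEED, but the velocity parameter fed to the
# blendspace after a match is the gameplay-driven one. Names are illustrative.

RUN_SPEED = 600.0  # the single velocity the blendspaces were sampled at

def play_matched_pose(database, query, desired_speed):
    asset, time, indexed_params = database.find_best_match(query)
    # override the indexed velocity with the desired one
    params = dict(indexed_params, velocity=desired_speed)
    return asset.evaluate(time, params)

def scale_trajectory(trajectory, desired_speed):
    """Scale predicted positions so the query matches the overridden speed.

    trajectory: list of (time, (x, y)) samples relative to the character,
    generated assuming the character moves at RUN_SPEED.
    """
    ratio = desired_speed / RUN_SPEED
    return [(t, (x * ratio, y * ratio)) for t, (x, y) in trajectory]
```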

Similarly to this, another option is to just use a Stride Warping node within the anim graph, as that will apply a play rate scale to the output of the motion matching node. Again, you would likely need to modify your trajectory generation to account for this.

The last option would be to use the Trajectory Speed Multiplier property on the Pose History node. From what I remember, this will scale the trajectory and the play rate in one step.
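If that property works the way I remember, it is conceptually the equivalent of doing both scalings at once (illustrative Python, not engine code):

```python
# Conceptual equivalent of a trajectory speed multiplier: scale the predicted
# trajectory positions and the animation play rate by the same factor.

def apply_speed_multiplier(trajectory, play_rate, multiplier):
    scaled = [(t, (x * multiplier, y * multiplier)) for t, (x, y) in trajectory]
    return scaled, play_rate * multiplier
```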

[Image Removed]

Happy to discuss any of this in more detail if you have questions.

Thanks,

Euan

Thank you!