Dynamic SampleTimeOffset for PoseSearchFeatureChannel_Curve

I’m looking to create essentially a PoseSearchFeatureChannel_Curve with a *dynamic* Sample Time Offset, so that I can favour picking animation frames that have a particular value N seconds in the future. The current implementation appears to only support a static time offset, which is essentially baked into the internal data when the asset is indexed, but it’s crucial for my use case that I can specify a different value each time a pose search is performed.

I’m fairly new to Unreal’s implementation of Motion Matching, so I’m unsure of the best way to build this, or whether there is something inherent in the idea that would prevent it from working.

My overall goal is to have a Curve on each animation in the Pose Search Database that encodes how similar each frame is to a reference pose. I can then use that to only pick frames for Motion Matching that will result in reaching a frame close to the reference pose N seconds into the future. The path to get there could be anything, but it’s essential that we get close to the reference pose again after N seconds.

Or is there perhaps a better way to approach this?

We don’t have documentation related to the feature yet, but by using UPoseSearchSchema::NumberOfPermutations > 1 in combination with the EPermutationTimeType property on different channels, you could achieve something similar. The idea is that we sample every “frame of animation” NumberOfPermutations times (intuitively, creating multiple duplicated database poses, in case all the channels’ EPermutationTimeType properties are set to EPermutationTimeType::UseSampleTime). By using EPermutationTimeType::UsePermutationTime or EPermutationTimeType::UseSampleToPermutationTime, we instead target a different frame for those channels’ sampling, where the time is “(as UseSampleTime) + Schema->PermutationsTimeOffset + PermutationIndex / Schema->PermutationsSampleRate”.
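
To make that time formula concrete, here’s a minimal sketch (not the engine’s actual code; the helper name and signature are invented for illustration) of the time a UsePermutationTime / UseSampleToPermutationTime channel ends up targeting for a given permutation:

```cpp
// Sketch only: the sample time targeted by a permutation-aware channel,
// following the formula quoted above. Helper name and signature are hypothetical.
float GetPermutationSampleTime(float SampleTime, float PermutationsTimeOffset,
                               int PermutationIndex, float PermutationsSampleRate)
{
	// (as UseSampleTime) + Schema->PermutationsTimeOffset
	//   + PermutationIndex / Schema->PermutationsSampleRate
	return SampleTime + PermutationsTimeOffset + PermutationIndex / PermutationsSampleRate;
}
```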

It’s not implemented for the PoseSearchFeatureChannel_Curve yet, but it should be as simple as transferring the logic associated with UPoseSearchFeatureChannel_Position::PermutationTimeType to the curve channel.
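
As a rough illustration of what that transfer could look like, here’s a hypothetical sketch of a curve channel that exposes the same PermutationTimeType property the position channel has. The class name, include paths, and default value are assumptions, and the indexing code would still need to consult the property the way UPoseSearchFeatureChannel_Position does:

```cpp
// Hypothetical sketch only, not part of the PoseSearch plugin. Include paths and
// the class name are assumptions; in practice the property would likely be added
// to UPoseSearchFeatureChannel_Curve itself along with the indexing logic.
#include "PoseSearch/PoseSearchFeatureChannel_Curve.h"
#include "PoseSearchFeatureChannel_PermutationCurve.generated.h"

UCLASS()
class UPoseSearchFeatureChannel_PermutationCurve : public UPoseSearchFeatureChannel_Curve
{
	GENERATED_BODY()

public:
	// Same kind of property the position channel exposes; indexing would use it
	// to pick the permutation sample time instead of the plain sample time.
	UPROPERTY(EditAnywhere, Category = "Settings")
	EPermutationTimeType PermutationTimeType = EPermutationTimeType::UseSampleTime;
};
```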

To give you an example of how this works when using UPoseSearchFeatureChannel_Position (sampling root to root bone), consider a database with a sufficiently long animation, with a UAnimNotifyState_PoseSearchExcludeFromDatabase excluding ALL the frames except the first, and a schema with NumberOfPermutations=10, PermutationsSampleRate=30, PermutationsTimeOffset=0, and a UPoseSearchFeatureChannel_Position with PermutationTimeType::UseSampleToPermutationTime.

After indexing, the database will contain 10 poses (1 sampleable frame times NumberOfPermutations). The first pose’s position channel will have the displacement of the root from time 0 to time 0 (identity), the second pose from time 1/PermutationsSampleRate to 0 (the first frame time), the third from 2/PermutationsSampleRate to 0, and so on. At runtime, while composing the MM query, you can select a “wanted” displacement, which translates into poses offset at different times in your animation.
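
To spell out the arithmetic of that example, here’s a small standalone sketch (not engine code) that prints the 10 permutation times produced by NumberOfPermutations=10, PermutationsSampleRate=30, PermutationsTimeOffset=0, with the single remaining frame at time 0:

```cpp
#include <cstdio>

int main()
{
	const int   NumberOfPermutations   = 10;
	const float PermutationsSampleRate = 30.0f;
	const float PermutationsTimeOffset = 0.0f;
	const float SampleTime             = 0.0f; // only the first frame survives the exclusion notify

	for (int PermutationIndex = 0; PermutationIndex < NumberOfPermutations; ++PermutationIndex)
	{
		const float PermutationTime =
			SampleTime + PermutationsTimeOffset + PermutationIndex / PermutationsSampleRate;

		// Pose 0: 0.000s -> 0s (identity), Pose 1: 0.033s -> 0s, Pose 2: 0.067s -> 0s, ...
		std::printf("Pose %d: root displacement from %.3fs back to %.3fs\n",
			PermutationIndex, PermutationTime, SampleTime);
	}
	return 0;
}
```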

Using UPoseSearchFeatureChannel_PermutationTime in your schema will allow you to bias your search towards the wanted permutation time as well.

If you’re interested in seeing an application, this tech is currently used in FAnimNode_BlendStack_Standalone::StitchDatabase to “replace” blends with stitches of animations in the blend stack, if a StitchDatabase is provided (and set up accordingly).

On a different note, if you’re looking for (future) event matching (instead of future poses), we recently introduced that feature, and it can be driven via FAnimNode_MotionMatching::EventToSearch.

Hope this helps! -Sam

Thanks, Sam! I’ve got a good idea now of how this could work for us. I also have some alternate ideas about how to filter out our data a bit more so that this is less necessary now, but we’ll see how it goes.

David