We don’t have documentation for this feature yet, but by setting UPoseSearchSchema::NumberOfPermutations > 1 in combination with the EPermutationTimeType property on the different channels you can achieve something similar. The idea is that we sample every “frame of animation” NumberOfPermutations times (intuitively, creating multiple duplicated database poses, in case all the channels’ EPermutationTimeType properties are set to EPermutationTimeType::UseSampleTime). By using EPermutationTimeType::UsePermutationTime or EPermutationTimeType::UseSampleToPermutationTime instead, those channels sample a different frame, whose time is “(time as in UseSampleTime) + Schema->PermutationsTimeOffset + PermutationIndex / Schema->PermutationsSampleRate”.
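As a minimal sketch (not the actual engine indexing code, just the formula above written out), the per-permutation sampling time works out to something like:

```cpp
// Minimal sketch, not engine code: how the sampling time for a given permutation
// follows from the schema settings described above.
float GetPermutationSampleTime(float SampleTime /* time as in UseSampleTime */,
                               int32 PermutationIndex,
                               float PermutationsTimeOffset,
                               float PermutationsSampleRate)
{
    // Each database pose is duplicated NumberOfPermutations times; permutation
    // PermutationIndex targets a time shifted by PermutationIndex frames at
    // PermutationsSampleRate, plus the fixed PermutationsTimeOffset.
    return SampleTime + PermutationsTimeOffset + PermutationIndex / PermutationsSampleRate;
}
```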
It’s not implemented for the PoseSearchFeatureChannel_Curve yet, but it would be as simple as transferring the logic associated with UPoseSearchFeatureChannel_Position::PermutationTimeType over to the curve channel.
To give you an example of how this works with UPoseSearchFeatureChannel_Position (sampling root relative to root bone), consider a database with an animation that is “long enough”, a UAnimNotifyState_PoseSearchExcludeFromDatabase excluding ALL the frames except the first, and a schema with NumberOfPermutations = 10, PermutationsSampleRate = 30, PermutationsTimeOffset = 0, and a UPoseSearchFeatureChannel_Position with PermutationTimeType set to UseSampleToPermutationTime.
After indexing, the database will contain 10 poses (1 sampleable frame times NumberOfPermutations). The position channel of the first pose will hold the displacement of the root from time 0 to time 0 (identity), the second from time 1/PermutationsSampleRate to 0 (the first frame’s time), the third from 2/PermutationsSampleRate to 0, and so on. So at runtime, while composing the MM query, you can select a “wanted” displacement that translates into poses offset at different times in your animation.
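Purely to illustrate the example above (the loop and variable names here are hypothetical, not engine API), the 10 indexed poses look like this:

```cpp
// Illustrative only: the 10 poses produced by the example above, with
// NumberOfPermutations = 10, PermutationsSampleRate = 30, PermutationsTimeOffset = 0,
// and only the first frame not excluded from the database.
for (int32 PermutationIndex = 0; PermutationIndex < 10; ++PermutationIndex)
{
    const float SampleTime = 0.f;                                        // only non-excluded frame
    const float PermutationTime = SampleTime + PermutationIndex / 30.f;  // 0, 1/30, 2/30, ...

    // With UseSampleToPermutationTime, the position channel stores the root
    // displacement from PermutationTime back to SampleTime:
    // pose 0 -> identity, pose 1 -> root motion over 1/30 s, pose 2 -> over 2/30 s, ...
}
```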
Adding a UPoseSearchFeatureChannel_PermutationTime to your schema will also allow you to bias your search towards the wanted permutation time.
If you’re interested in seeing an application, this tech is currently used in FAnimNode_BlendStack_Standalone::StitchDatabase to “replace” blends with stitches of animations in the blend stack, if a StitchDatabase is provided (and set up accordingly).
On a different note, if you’re looking for (future) event matching (instead of future poses), we recently introduced that feature, and it can be driven via FAnimNode_MotionMatching::EventToSearch.
Hope this helps! -Sam