Hi, it’s worth starting this discussion by saying that, historically, Unreal hasn’t had a good solution for synchronizing animation between completely independent meshes. We really only had two options: follower montages, where playback on the follower is driven from a lead montage, and forced synchronization of anim bp state - i.e. ensuring the current state and sequence player are the same between anim graphs. Instead, the Unreal-based solution up until now has been more about having skeletons that are subsets of one another and copying transform data between the two. The classic example there is the Fortnite-style invisible base skeleton that runs locomotion, with transforms then copied onto other skeletons that are supersets of the base. That works well for single characters, but something similar can also potentially be done with multiple characters (think a merged skeleton for horse + rider).
That’s some background, but thinking specifically about where we are at the moment, there are probably two main options to look at depending on your requirements (assuming you don’t want to go down the route of a single merged skeleton). One is a procedural solution, similar to how we layer upper body animation on top of locomotion animation generated by motion matching in Fortnite. The other is a fully animation-driven solution where you could look at leveraging the new Pose Search Interaction Assets to allow you to keep your animations synchronized.
For the procedural approach, you would have a setup where the horse contains an attachment bone, which is animated with the horse’s motion. You would also have helper bones corresponding to various bones on the rider - likely the feet and the hands, maybe the head. Those helper bones are animated as part of the horse animations, and then, once you attach the rider mesh, you constrain the rider to the helper bones. Then you can layer animation on the upper body. You may run into problems similar to those we hit with layering in Fortnite, where we ended up writing a custom Copy Motion [Content removed] to reinstate some of the locomotion movement back onto the upper body. This procedural approach is likely the one we would go with, but there are downsides (as with the animation-driven approach), e.g. what happens if you have differently proportioned riders, etc.
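To make the attachment side of that concrete, here’s a minimal native sketch: it attaches the rider mesh to the horse’s attachment bone and samples the animated helper bones each frame so they can be used as constraint/IK targets in the rider’s anim graph. The socket/bone names (RiderAttach, hand_l_target, etc.) are hypothetical placeholders for whatever your horse skeleton actually authors.

```cpp
// Minimal sketch: attach the rider to the horse's animated attachment bone and
// sample the helper bones each frame. The socket/bone names used here are
// hypothetical - substitute the names your horse skeleton actually contains.
#include "Components/SkeletalMeshComponent.h"

void AttachRiderToHorse(USkeletalMeshComponent* RiderMesh, USkeletalMeshComponent* HorseMesh)
{
	// Snap the rider component onto the horse's attachment bone; the rider now
	// inherits the horse's animated motion for that bone.
	RiderMesh->AttachToComponent(
		HorseMesh,
		FAttachmentTransformRules::SnapToTargetNotIncludingScale,
		TEXT("RiderAttach"));
}

// Called per tick (or from the rider's anim instance) to gather targets from the
// horse's animated helper bones. The rider anim graph would then constrain the
// hands/feet to these transforms (e.g. via exposed pins on IK/Transform Bone nodes
// or a Control Rig) and layer upper-body animation on top.
void GatherRiderTargets(const USkeletalMeshComponent* HorseMesh,
	FTransform& OutLeftHand, FTransform& OutRightHand,
	FTransform& OutLeftFoot, FTransform& OutRightFoot)
{
	// GetSocketTransform falls back to the bone transform if no socket exists
	// with the given name, so raw helper bone names work here too.
	OutLeftHand  = HorseMesh->GetSocketTransform(TEXT("hand_l_target"));
	OutRightHand = HorseMesh->GetSocketTransform(TEXT("hand_r_target"));
	OutLeftFoot  = HorseMesh->GetSocketTransform(TEXT("foot_l_target"));
	OutRightFoot = HorseMesh->GetSocketTransform(TEXT("foot_r_target"));
}
```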
The animation-driven solution would leverage Pose Search Interaction Assets, which allow you to specify two animations per asset that are designed to be synchronized. This is an example of one of those assets:
[Image Removed]
Essentially, you have two matching animations here for the two different meshes. These animations have to be the same length. You then run the motion matching search specifically for the first role (leader in this case), so all the pose selection is based on that role’s animation data. Then you can extract the equivalent data (animation, playback position, play rate, etc.) for the second role.
To do this, you need to set up your schema to include these roles, adding a role for each of your two skeletons. You want the horse to be the driver mesh in this case, so it should be the first skeleton/role. Then, on each of the channels, you specify the role that you want the search to be done against. In your case, you want this to be driven by the horse, so all your channels would be searching against that role.
[Image Removed]
Then, in your anim bp for the lead role (the horse), you can run your motion matching node as normal and it will select the correct pose for the horse. But it’ll also output data for the secondary role. You can extract that data as follows (the MotionMatching anim node reference requires a tag with that name to be added to your motion matching anim node).
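For reference, the same pattern in native code looks roughly like the sketch below: a thread-safe function bound to the tagged motion matching node’s On Update that pulls the current search result into a property. This is only a sketch against the experimental Pose Search API as it exists in recent engine versions (UMotionMatchingAnimNodeLibrary, FPoseSearchBlueprintResult); names and include paths may differ in your build, and the exact accessor for the secondary role’s data (what the removed screenshot showed) isn’t reproduced here.

```cpp
// Sketch of a thread-safe anim node function on the horse (leader) anim instance.
// Bind it to "On Update" of the motion matching node in the anim graph.
// UMotionMatchingAnimNodeLibrary / FPoseSearchBlueprintResult come from the
// experimental Pose Search plugin - verify names/paths against your engine version.
#include "Animation/AnimInstance.h"
#include "Animation/AnimNodeReference.h"
#include "Animation/AnimExecutionContext.h"
#include "PoseSearch/MotionMatchingAnimNodeLibrary.h"
#include "PoseSearch/PoseSearchLibrary.h"
#include "HorseAnimInstance.generated.h"

UCLASS()
class UHorseAnimInstance : public UAnimInstance
{
	GENERATED_BODY()

public:
	// Cached result of the search (asset, playback time, play rate, etc.).
	// GetMotionMatchingSearchResult returns the node's own (leader-role) selection;
	// the accessor for the secondary role's data depends on the interaction API in
	// your engine version and is not reproduced here. In practice you'd copy what
	// the rider needs on the game thread (e.g. in NativeUpdateAnimation).
	UPROPERTY(BlueprintReadOnly, Category = "Motion Matching")
	FPoseSearchBlueprintResult CachedSearchResult;

	UFUNCTION(BlueprintCallable, Category = "Motion Matching", meta = (BlueprintThreadSafe))
	void OnMotionMatchingUpdate(const FAnimUpdateContext& Context, const FAnimNodeReference& Node)
	{
		EAnimNodeReferenceConversionResult Conversion;
		const FMotionMatchingAnimNodeReference MotionMatchingNode =
			UMotionMatchingAnimNodeLibrary::ConvertToMotionMatchingNode(Node, Conversion);

		if (Conversion == EAnimNodeReferenceConversionResult::Succeeded)
		{
			// Pull the currently selected animation, playback position, play rate, etc.
			UMotionMatchingAnimNodeLibrary::GetMotionMatchingSearchResult(MotionMatchingNode, CachedSearchResult);
		}
	}
};
```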
[Image Removed]
You can then take that data and feed it into a blend stack node in the rider’s graph to select the asset, play rate, playback point, etc.
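On the rider’s side, a blend stack node can be driven in the same way from a node function. Again, this is just a sketch: it assumes UBlendStackAnimNodeLibrary::BlendTo from the experimental blend stack module, and the Follower* properties are hypothetical values you’d copy across from the data extracted above (e.g. on the game thread in NativeUpdateAnimation).

```cpp
// Sketch of the rider (follower) anim instance driving a Blend Stack node from the
// data extracted on the horse. Bind OnBlendStackUpdate to "On Update" of the blend
// stack node. UBlendStackAnimNodeLibrary lives in the experimental blend stack
// module (it was previously part of the Pose Search plugin) - verify names/paths.
#include "Animation/AnimInstance.h"
#include "Animation/AnimationAsset.h"
#include "Animation/AnimNodeReference.h"
#include "Animation/AnimExecutionContext.h"
#include "BlendStack/BlendStackAnimNodeLibrary.h"
#include "RiderAnimInstance.generated.h"

UCLASS()
class URiderAnimInstance : public UAnimInstance
{
	GENERATED_BODY()

public:
	// Follower-role data copied from the horse's search result on the game thread:
	// asset, start time, play rate, loop flag.
	UPROPERTY(BlueprintReadWrite, Category = "Motion Matching")
	TObjectPtr<UAnimationAsset> FollowerAsset;

	UPROPERTY(BlueprintReadWrite, Category = "Motion Matching")
	float FollowerTime = 0.f;

	// Wanted play rate from the search result; forward it to the blend stack's
	// play rate (argument or pin - the exact parameter varies by engine version).
	UPROPERTY(BlueprintReadWrite, Category = "Motion Matching")
	float FollowerPlayRate = 1.f;

	UPROPERTY(BlueprintReadWrite, Category = "Motion Matching")
	bool bFollowerLoop = true;

	UFUNCTION(BlueprintCallable, Category = "Motion Matching", meta = (BlueprintThreadSafe))
	void OnBlendStackUpdate(const FAnimUpdateContext& Context, const FAnimNodeReference& Node)
	{
		EAnimNodeReferenceConversionResult Conversion;
		const FBlendStackAnimNodeReference BlendStack =
			UBlendStackAnimNodeLibrary::ConvertToBlendStackNode(Node, Conversion);

		// Only push a new entry onto the blend stack when the selected asset changes;
		// otherwise let the current entry keep playing.
		if (Conversion == EAnimNodeReferenceConversionResult::Succeeded &&
			FollowerAsset != nullptr && FollowerAsset != LastPushedAsset)
		{
			UBlendStackAnimNodeLibrary::BlendTo(Context, BlendStack, FollowerAsset,
				FollowerTime, bFollowerLoop, /*bMirrored*/ false, /*BlendTime*/ 0.2f);
			LastPushedAsset = FollowerAsset;
		}
	}

private:
	UPROPERTY(Transient)
	TObjectPtr<UAnimationAsset> LastPushedAsset;
};
```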
The alternative setup is to use the Motion Match blueprint function (rather than the anim node), extract this data for both the lead and follower assets, and feed it into blend stack nodes in the separate anim bps for the two meshes.
The problem with this approach is that it’s very much dependent on having matching animation assets, and it’s a lot of work to generate all of those assets and keep them up to date when you make changes. For instance, say you want a different variant of your rider animations (injured, etc.) - you now have to mocap a whole new set of rider animations that match exactly with the horse animations. This is the approach we used in the Witcher demo at UE Fest, but we wouldn’t necessarily recommend it unless you’re happy that you have the resources available to generate all of the animations (for an idea of scale, it’s likely thousands of animations, not hundreds). It’s also important to bear in mind that although this isn’t full motion matching interaction, it is still an experimental feature.
Happy to discuss this further if you want to dig into the options in more depth.