Hello everyone!
First, a little context: I am an inexperienced Unreal Engine user/game developer. The problem I am tackling is academic in origin: I am writing a thesis on an animation system for sign language, where I personally focus on realistic facial animation. The animated characters eventually need to run in a VR/XR environment.
MetaHumans obviously have a lot of potential in this regard. The option to use the Live Link Face app for cheap facial motion capture was a deciding factor in researching what is possible with MetaHumans. I am looking for ways to reuse parts of existing animations to generate new ones; modularity, if you will. For example, I want the mouth to say “apple” while the eyebrows rise in a surprised expression, with the two components coming from two different source animations.
My idea was to use the CSV blend shape data exported by the Live Link Face app to drive the individual curves of the animation. I have two pipelines that create an animation; both split the face into an upper and a lower region. The first drives the curves directly from a data table.
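In code terms, the idea of the first pipeline is roughly the following. This is only a minimal sketch, not my actual AnimGraph: the UpperFaceCurves set and the pre-parsed CSV rows are placeholders of mine, the curve names assume Live Link Face's ARKit-style spelling, and since MetaHuman faces normally go through the RigLogic node, SetMorphTarget may well not reach them at all.

```cpp
#include "CoreMinimal.h"
#include "Components/SkeletalMeshComponent.h"

// Hypothetical split of the ARKit blendshape set into an upper-face region;
// the real set would also cover the remaining brow/eye/cheek curves.
static const TSet<FName> UpperFaceCurves = {
    TEXT("BrowInnerUp"), TEXT("BrowOuterUpLeft"), TEXT("BrowOuterUpRight"),
    TEXT("EyeBlinkLeft"), TEXT("EyeBlinkRight")
};

// SurpriseRow / SpeechRow are one CSV row each (curve name -> value),
// sampled from the two source recordings for the current frame.
void ApplyRegionCurves(USkeletalMeshComponent* FaceMesh,
                       const TMap<FName, float>& SurpriseRow,
                       const TMap<FName, float>& SpeechRow)
{
    // The upper region comes exclusively from the "surprised" recording...
    for (const TPair<FName, float>& Curve : SurpriseRow)
    {
        if (UpperFaceCurves.Contains(Curve.Key))
        {
            FaceMesh->SetMorphTarget(Curve.Key, Curve.Value);
        }
    }
    // ...and everything else exclusively from the "apple" recording,
    // so neither region is ever averaged with the other.
    for (const TPair<FName, float>& Curve : SpeechRow)
    {
        if (!UpperFaceCurves.Contains(Curve.Key))
        {
            FaceMesh->SetMorphTarget(Curve.Key, Curve.Value);
        }
    }
}
```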
The second approach deletes the irrelevant curve data from each animation and blends the resulting animations.
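To make the curve-stripping step concrete, here is a sketch of how it could be scripted as an editor utility. It assumes UAnimationBlueprintLibrary from the editor-only AnimationBlueprintLibrary module, and the keep-list is again a placeholder of mine:

```cpp
#include "AnimationBlueprintLibrary.h"
#include "Animation/AnimSequence.h"

// Editor-only: strip every float curve that is not in the keep-list from a
// (duplicated!) UAnimSequence, so the copy only animates one face region.
void KeepOnlyCurves(UAnimSequence* Sequence, const TSet<FName>& CurvesToKeep)
{
    TArray<FName> CurveNames;
    UAnimationBlueprintLibrary::GetAnimationCurveNames(
        Sequence, ERawCurveTrackTypes::RCT_Float, CurveNames);

    for (const FName& CurveName : CurveNames)
    {
        if (!CurvesToKeep.Contains(CurveName))
        {
            // Remove the curve from this asset only, not from the skeleton.
            UAnimationBlueprintLibrary::RemoveCurve(Sequence, CurveName, false);
        }
    }
}
```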
In either case, the components do not blend the way I want them to: the upper and the lower part of the face both animate at half intensity. I have tried the Layered blend per bone node, but it produces the same result as a normal animation blend.
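If I understand the blend math correctly, this is exactly what a standard two-pose blend does to curves: it takes a per-curve weighted average, so at a weight of 0.5 every curve arrives halved. A tiny illustration of the difference between what I get and what I want (plain C++, names are mine; I also noticed that Layered blend per bone exposes a Curve Blend Option, ECurveBlendOption, with values like Override and UseMaxValue, which might be the relevant switch here):

```cpp
// What a standard blend does per curve: a weighted average. With W = 0.5,
// a curve keyed at 1.0 in one input and 0.0 in the other comes out at 0.5.
float BlendByWeight(float A, float B, float W)
{
    return A * W + B * (1.0f - W);   // BlendByWeight(1.0f, 0.0f, 0.5f) == 0.5f
}

// What I actually want per curve: the full value from whichever input owns
// the region. Taking the maximum is equivalent whenever the "wrong" input
// keys the curve at 0 (or does not key it at all).
float UseMaxValue(float A, float B)
{
    return A > B ? A : B;            // UseMaxValue(1.0f, 0.0f) == 1.0f
}
```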
I suspect the arkit_remapping node drives the entire face. Has anyone here attempted something similar and found a way to animate a MetaHuman's face modularly? Am I using certain nodes incorrectly, or is the MetaHuman face simply not suited to this kind of approach? Would I have more luck diving into the C++ code? As an alternative, I am considering blending the CSV data outside of Unreal instead, although I would prefer an in-engine solution.
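In case I end up going the out-of-engine route, my fallback plan looks roughly like this: merge the two Live Link Face CSV exports column-wise, taking the upper-face columns from the "surprise" take and everything else from the "apple" take. This sketch assumes both files share the same header row and frame count, and the file names and column prefixes are placeholders of mine:

```cpp
#include <fstream>
#include <sstream>
#include <string>
#include <vector>

// Split one CSV line on commas (no quoted-field handling; the Live Link Face
// export does not appear to need it, but that is an assumption).
static std::vector<std::string> SplitCSVLine(const std::string& Line)
{
    std::vector<std::string> Cells;
    std::stringstream Stream(Line);
    std::string Cell;
    while (std::getline(Stream, Cell, ',')) Cells.push_back(Cell);
    return Cells;
}

int main()
{
    std::ifstream Surprise("surprise_take.csv");
    std::ifstream Speech("apple_take.csv");
    std::ofstream Merged("merged_take.csv");

    // Columns whose header starts with one of these belong to the upper face.
    const std::vector<std::string> UpperPrefixes = { "Brow", "Eye" };

    std::string SurpriseLine, SpeechLine;
    std::vector<std::string> Header;

    // Walk both files in lockstep; rows are assumed to line up frame-for-frame.
    while (std::getline(Surprise, SurpriseLine) && std::getline(Speech, SpeechLine))
    {
        const std::vector<std::string> SurpriseCells = SplitCSVLine(SurpriseLine);
        const std::vector<std::string> SpeechCells   = SplitCSVLine(SpeechLine);

        if (Header.empty())                     // first row is the shared header
        {
            Header = SurpriseCells;
            Merged << SurpriseLine << '\n';
            continue;
        }

        for (size_t i = 0; i < Header.size(); ++i)
        {
            bool bUpperFace = false;
            for (const std::string& Prefix : UpperPrefixes)
            {
                if (Header[i].rfind(Prefix, 0) == 0) bUpperFace = true;
            }
            Merged << (bUpperFace ? SurpriseCells[i] : SpeechCells[i]);
            Merged << (i + 1 < Header.size() ? "," : "\n");
        }
    }
    return 0;
}
```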
Again, I am very new to Unreal Engine, so any help is greatly appreciated. Many thanks in advance.