I am currently building a markerless facial animation capture tool for one of my projects, which I intend to release as open source for the community once it reaches a usable stage.
One question for folks familiar with facial animation: how do you use face joints/bones and blend shapes/morph targets together?
Looking at the Face AR example (https://docs.unrealengine.com/en-us/Platforms/AR/HandheldAR/FaceARSample), they mention using corrective blend shapes, but I am not sure exactly how to go about applying them. Do you combine joints and blend shapes to create "poses" and layer those on top of the base morph targets?
So far I have mostly focused on interpolating blend shape/morph target weights on a 0-1 scale, but I figure adding joint-driven animation would give me greater control.
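To make the question concrete, here is a minimal sketch of the kind of thing I mean by combining the two: a corrective morph target whose 0-1 weight is driven by a joint's rotation (all names, the jaw joint, and the 25-degree range are just illustrative assumptions, not anyone's actual rig):

```python
def clamp01(x: float) -> float:
    """Clamp a value into the standard 0-1 morph target weight range."""
    return max(0.0, min(1.0, x))

def jaw_corrective_weight(jaw_open_deg: float, max_open_deg: float = 25.0) -> float:
    """Map a jaw joint's open rotation (in degrees) to a 0-1 weight
    for a corrective blend shape, so the corrective ramps in as the
    joint rotates. The 25-degree maximum is an arbitrary example value."""
    return clamp01(jaw_open_deg / max_open_deg)

# A half-open jaw (12.5 of 25 degrees) drives the corrective at 0.5
print(jaw_corrective_weight(12.5))
```

Is driving correctives off joint transforms like this roughly the right idea, or do people author the pose/corrective mapping some other way?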
Any insight into your own facial animation workflow would be helpful. I am using a Daz3D Genesis 3 character, if that helps.