Facial / body rig workflow

I’m wondering what other people’s workflows are for facial rigging/animation and how they combine that with a body rig/animation? I guess for the purposes of this post the main question is how to set everything up before getting it into UE4.

I’m planning on getting a rig created using the Polywink service, then doing mo-cap and retargeting with Faceware. I had hoped that I could just combine the Polywink-supplied face rig with a body rig created in Maya using the Quick Rig tool. However, this is proving very tricky!

If you were working on facial animations, would you do them in a separate Maya scene with just the head/face rig, then bake out the animation and apply it to a head that is rigged to a body but no longer has its control rig?
or
Do I need to figure out how to combine the head/face rig with my body rig, so I have an all-singing, all-dancing character that can have body and facial mo-cap applied to it in one scene?

If it’s the latter, does anyone have any advice, or can anyone point me in the direction of any good tutorials?

I can’t find much out there, and the facial animation tutorials I do find always use a separate head with no body attached.

Most of the time the facial stuff is layered on top.
Even the bones sit on top, so you can easily patch them together within the UE4 skeleton without any issues.

For my workflow I have a master mannequin/character (fingers, toes, face) that was created straight from my Blender plugin (bonebreaker) and Rigify.
It takes a good amount of work to get things right, but once the initial skeleton is created and retargets animations correctly, it becomes easy to layer up multiple things.
Usually I have dialogue happen as a montage playing on a “face” slot and above.
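
For reference, kicking the montage off from C++ is simple once the slot is set up. A minimal sketch, assuming a UAnimMontage asset authored on a custom “Face” slot (the slot name and function name here are just placeholders):

```cpp
// Minimal sketch: play a facial dialogue montage from C++.
// Assumes the montage asset was authored on a custom "Face" slot,
// so it only drives the facial bones and blends on top of whatever
// body animation is already playing.
#include "GameFramework/Character.h"
#include "Animation/AnimInstance.h"
#include "Animation/AnimMontage.h"

void PlayFaceDialogue(ACharacter* Character, UAnimMontage* FaceMontage)
{
    if (!Character || !FaceMontage)
    {
        return;
    }

    if (UAnimInstance* AnimInstance = Character->GetMesh()->GetAnimInstance())
    {
        // Montage_Play returns the montage length, or 0.f if it failed to start.
        AnimInstance->Montage_Play(FaceMontage, 1.0f /* play rate */);
    }
}
```

The “Face” slot itself still has to exist as a Slot node in the Animation Blueprint, placed after the body animation, so the facial bones blend on top of whatever the body is doing.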

As always, the most time-consuming part is the custom weight-paint job that makes the mesh follow the bones.

The other way to go is to add and manipulate morph targets via curves.
So you create a big set of morphs that manipulate the 32 facial muscles, the eyes, etc.
Then to animate it you play with the morph target curves, which export directly as the animation yet also allow you to code or script each morph individually by feeding in different curve values. I tried doing this and stopped 20 hours and 10 muscles in. It’s tiresome and largely impractical even with knowledge of anatomy. There may be “ready to go” rigs you can modify/reshape, though.
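
To illustrate the “script them individually” part: once the morphs exist on the skeletal mesh, driving one from code is straightforward. A minimal sketch, where the morph names are placeholders for whatever your mesh actually exports:

```cpp
// Minimal sketch: drive individual morph targets from code instead of
// (or on top of) baked animation curves. The names "jawOpen" and
// "browUp" are placeholders for whatever your mesh exports.
#include "Components/SkeletalMeshComponent.h"

void ApplyScriptedExpression(USkeletalMeshComponent* Mesh, float JawWeight, float BrowWeight)
{
    if (!Mesh)
    {
        return;
    }

    // SetMorphTarget takes the morph/curve name and a weight (usually 0..1).
    Mesh->SetMorphTarget(TEXT("jawOpen"), JawWeight);
    Mesh->SetMorphTarget(TEXT("browUp"), BrowWeight);
}
```

If the same names exist as curves inside an animation asset, UE4 drives the matching morphs from those curves automatically, which is what makes the “export directly as the animation” part work.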

The idea with the morphs is that after setup you can film your face with something like a Kinect and transfer the tracked motion to the curves without needing extra facial bones (fewer bones in the character = better performance).
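
Whatever the tracker is (Kinect, Faceware, etc.), that transfer step boils down to pushing one tracked weight per facial feature into the matching morph every frame. A minimal sketch, where GetTrackedWeight is a hypothetical stand-in for your capture SDK:

```cpp
// Minimal sketch: feed live capture data into morph targets each frame,
// e.g. from an actor's Tick. No extra facial bones needed.
#include "Components/SkeletalMeshComponent.h"

// Hypothetical stand-in for your capture SDK's per-feature output
// (Kinect, Faceware, etc.). Replace with real tracker queries.
static float GetTrackedWeight(const FName& /*FeatureName*/)
{
    return 0.0f; // placeholder value
}

void UpdateFaceFromCapture(USkeletalMeshComponent* Mesh)
{
    if (!Mesh)
    {
        return;
    }

    // Placeholder feature names; use whatever your tracker reports.
    static const FName Features[] = { TEXT("jawOpen"), TEXT("browUp"), TEXT("smileL"), TEXT("smileR") };
    for (const FName& Feature : Features)
    {
        Mesh->SetMorphTarget(Feature, GetTrackedWeight(Feature));
    }
}
```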

It’s all fairly complex either way.
The point though is that if you set up the morphs right you can then copy the curves over to any animation.

The bone way is slightly less complex but very much the same, except for the weight-paint part, which can mostly be automated by properly fitting the bones to the mesh.
With that, you won’t have direct access to the curves, so to modify the pose later you need to import a full animation that also contains the rest of the skeleton.
Naturally you can do as I did and make the facial animations play on a specific bone slot, so you don’t have to animate body motion and facial animation together and can keep the two pipelines separate.

Usage-wise, I would love to go the morph target way. It’s more performant overall and doesn’t require importing an animation at all.

The third way, I suppose, would be to go the Alembic route. I have almost no hands-on experience with it, but it’s essentially like exporting a set of instructions that modify vertices, and therefore the mesh, in real time. If you can isolate the face and attach it to the head bone, you may be able to have whatever animation play just by running different files. From what I have tested they play much like morph targets: a slider controls the frame you are on, and each frame keeps track of what is where exactly like a morph does, except it does this for the whole mesh.
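
I haven’t tried this in a real project either, but for what it’s worth: in UE4 an Alembic file can import as a GeometryCache asset, and “running different files” would then look roughly like this untested sketch:

```cpp
// Untested sketch: swap and play Alembic geometry caches at runtime.
// Requires UE4's experimental GeometryCache plugin to be enabled and
// the "GeometryCache" module added to your Build.cs dependencies.
#include "GeometryCacheComponent.h"
#include "GeometryCache.h"

void PlayFaceCache(UGeometryCacheComponent* FaceCacheComp, UGeometryCache* NewClip)
{
    if (!FaceCacheComp || !NewClip)
    {
        return;
    }

    FaceCacheComp->SetGeometryCache(NewClip); // swap in a different facial clip
    FaceCacheComp->PlayFromStart();
}
```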

Anyway, hope that helps.