Hi guys, we are slowly dipping our toes into the UE4 environment, coming from a film background. Our current project is to take a character into UE4 and animate him via an Xsens mocap suit and a Faceware facial rig.
We have the character set up as multiple meshes, such as body, head, hair, eyeballs, eyebrows, etc., and are using the Advanced Skeleton rig for body and face. It has a function to convert the body part of the rig to a UE skeleton automatically, which we got working nicely. There is also an option to convert the face rig to a "simple rig via bones" for UE (since FBX does not support constraint-driven rigs).
Here are my questions:
If I want to record cinematic-quality face animation, but also later be set up for tons of voice tracks in in-game play mode, what's the best pipeline?
*Do I have to go the Alembic route so I can use blend shapes, etc.?
*Do I create the simple bone rig, and if so, how do I drive it?
*For example, if I want to use standard animations (either from the marketplace or from my Xsens) and then use a separate FBX file to drive the bones for the dialogue and face, do these two animation files have to be combined into one animation first, or can they be attached separately and driven by game logic on one skeletal mesh?
*Could I use AI to have my character run around and then attach an FBX animation to the head only?
*What is the preferred rig? (We are playing with Advanced Skeleton because it can convert to MotionBuilder for mocap sessions and also allows hand animation in Maya.)
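For context, here is a rough sketch of the kind of per-bone layering I'm asking about in the dialogue/face question: a body animation drives every bone, and a separate face animation overrides only the bones under a chosen branch (I believe this is roughly what UE4's "Layered blend per bone" AnimGraph node does, but I'm not sure). All names and data below are made up for illustration.

```python
# Hypothetical sketch: blend two animation streams onto one skeleton,
# letting the face/dialogue track override only the bones under "head".
# This is NOT real UE4 API code, just the concept I'm asking about.

def descendants(skeleton, root):
    """Collect root plus every bone parented (directly or not) under it."""
    out = {root}
    changed = True
    while changed:
        changed = False
        for bone, parent in skeleton.items():
            if parent in out and bone not in out:
                out.add(bone)
                changed = True
    return out

def layered_blend(skeleton, body_pose, face_pose, branch_root):
    """Take the body pose, but use the face pose for bones under branch_root."""
    override = descendants(skeleton, branch_root)
    return {bone: face_pose[bone] if bone in override and bone in face_pose
            else xform
            for bone, xform in body_pose.items()}

# bone -> parent (None for the root); transforms stand in as strings here
skeleton = {"root": None, "spine": "root", "head": "spine",
            "jaw": "head", "arm_l": "spine"}
body = {"root": "run_0", "spine": "run_1", "head": "run_2",
        "jaw": "run_3", "arm_l": "run_4"}
face = {"head": "talk_0", "jaw": "talk_1"}

blended = layered_blend(skeleton, body, face, "head")
print(blended)
# head and jaw come from the dialogue track, the rest from the body track
```

If the engine supports something like this at runtime, it would answer whether the two FBX files can stay separate instead of being baked into one animation.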
I am a bit in limbo about which pipeline to bet on. Would love some advice.
PS. We will also want to do a real-time UE setup eventually, using the IKinema and Faceware Live plugins.
I hope this is not too confusing; please let me know if I forgot to mention anything needed.