Facial recognition implementation?

Hey guys, I'd like to know if it's possible to do facial recognition and use it to drive a facial rig? Something simple, like using a webcam and some markers.
I saw FaceRig for sale on Steam (https://www.facerig.com/) and wondered if it would be possible to implement something like it in UE4.

If so, where should I start?

Thanks

I was going to grab Mimic Live! from Daz3D… it’ll be a pita though. You would basically need to create a phoneme blendspace, and… yeah. If anyone knows of a good workflow / animation tools for automating the entire process, please share! I know there is FaceShift, but there is no “Indie License” available, and at $1500/year it’s way out of my budget.

Interesting app to add to my must-watch list. :wink:

A must-have feature is the ability to transfer facial data to a marker set, which is necessary to keep the asset footprint low and performance high.

Workflow-wise, without unique requirements, the approach would be to transfer the motion capture to a cluster set that corresponds to the same set attached to your character model.

Once the VO is captured, you would export the set to FBX, import it into UE4 targeting the character that contains the cluster set, and off you go. Assuming additive animation, you can then add the clusters to any model and share animations between one character and another.

Logic-wise (and this has been around for years), the magic is done with naming conventions: they let you transfer transform data from one form of control to another, or re-target when the naming is broken.
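
To make that concrete, a name-based remap can be as simple as a lookup table; here’s a minimal sketch (every channel and cluster name below is made up for illustration):

```cpp
// Minimal sketch of name-based retargeting; all names are hypothetical.
#include "CoreMinimal.h"

struct FCaptureSample
{
    FName ChannelName;    // e.g. a "jaw_open" channel from the capture source
    FTransform Transform; // captured transform for that channel
};

// Remap table: capture channel -> cluster on the target rig.
// If the naming convention is broken, you fix this table, not the data.
TMap<FName, FName> MakeChannelToClusterMap()
{
    TMap<FName, FName> Map;
    Map.Add(TEXT("jaw_open"),  TEXT("clus_jaw"));
    Map.Add(TEXT("brow_L_up"), TEXT("clus_brow_L"));
    Map.Add(TEXT("brow_R_up"), TEXT("clus_brow_R"));
    return Map;
}

void ApplySamples(const TArray<FCaptureSample>& Samples,
                  const TMap<FName, FName>& ChannelToCluster,
                  TMap<FName, FTransform>& OutClusterTransforms)
{
    for (const FCaptureSample& Sample : Samples)
    {
        // The FName match is the only glue between source and target:
        // any rig that follows the convention can consume the same data.
        if (const FName* Cluster = ChannelToCluster.Find(Sample.ChannelName))
        {
            OutClusterTransforms.Add(*Cluster, Sample.Transform);
        }
    }
}
```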

Since UE4 already supports blending by layer, you can treat facial capture as just another animation data set and use it the same way animation is used for things like aiming or other input-driven solutions.

Fancy way of saying it’s already implemented in UE4. :wink:
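
Something along these lines, for example (class and property names are my own invention; the actual blend would be done by a “Layered blend per bone” node in the AnimGraph reading this value):

```cpp
// Minimal sketch: expose a facial-layer blend weight from C++ so the
// AnimGraph can layer a facial animation over the body animation.
#include "CoreMinimal.h"
#include "Animation/AnimInstance.h"
#include "FacialAnimInstance.generated.h"

UCLASS()
class UFacialAnimInstance : public UAnimInstance
{
    GENERATED_BODY()

public:
    // Read by the AnimGraph as the blend weight of the facial layer,
    // so the facial data set behaves like any other input-driven layer.
    UPROPERTY(EditAnywhere, BlueprintReadWrite, Category = "Facial")
    float FacialBlendAlpha = 1.0f;
};
```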

I’ll probably open a thread in a week under the “Got skills?” forum since I’ll be offering it as a service with 3 different options:

  • Lipsync only ( Raw example here ): a very cheap but effective solution.
  • Blending between Faceshift data ( provided by the customer or performed by myself ) and custom lipsync ( mainly because the Kinect’s 30 fps is not enough for proper lipsync ).
  • Full facial mocap using Faceware technology, with a joint-based rig and corrective shapes.

I developed a custom rig ( using blend shapes; it will be further improved with joints later on ) which allows me to easily transfer blend shapes between characters with different topologies.

Facial animation is shareable between different characters since the animation is not baked into the rig but driven externally… it’s similar to the retargeting principle that already exists in UE4, but instead of manually retargeting, you literally apply the same animation to a different character and it’s already done.
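
In UE4 terms, that external driving could look something like this ( shape names are made up; shapes a mesh doesn’t have simply end up having no effect ):

```cpp
// Minimal sketch of externally driven blend shapes (names hypothetical).
// Because morph targets are addressed by FName, the same curve data can
// be applied to any character whose mesh exposes shapes with those names.
#include "Components/SkeletalMeshComponent.h"

void ApplyFacialFrame(USkeletalMeshComponent* Mesh,
                      const TMap<FName, float>& CurveValues)
{
    if (!Mesh)
    {
        return;
    }

    for (const TPair<FName, float>& Curve : CurveValues)
    {
        // A character with different topology just skips the shapes
        // it doesn't have; nothing is baked into the rig itself.
        Mesh->SetMorphTarget(Curve.Key, Curve.Value);
    }
}
```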

I’ll probably let the user create facial animations directly inside UE4 with a custom UMG widget, but for now I’ll be focusing on the service itself.
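
Rough idea of what that UMG tool could look like ( widget, class, and shape names are placeholders, not the actual tool ):

```cpp
// Sketch of an in-engine facial authoring widget: a slider driving one
// blend shape. A real tool would record these values over time.
#include "CoreMinimal.h"
#include "Blueprint/UserWidget.h"
#include "Components/Slider.h"
#include "Components/SkeletalMeshComponent.h"
#include "FacialTuningWidget.generated.h"

UCLASS()
class UFacialTuningWidget : public UUserWidget
{
    GENERATED_BODY()

public:
    // Slider bound in the UMG designer (must be named "ShapeSlider" there).
    UPROPERTY(meta = (BindWidget))
    USlider* ShapeSlider;

    // The character mesh being animated; set from outside the widget.
    UPROPERTY(BlueprintReadWrite, Category = "Facial")
    USkeletalMeshComponent* TargetMesh;

protected:
    virtual void NativeConstruct() override
    {
        Super::NativeConstruct();
        if (ShapeSlider)
        {
            ShapeSlider->OnValueChanged.AddDynamic(
                this, &UFacialTuningWidget::HandleSliderChanged);
        }
    }

    UFUNCTION()
    void HandleSliderChanged(float Value)
    {
        if (TargetMesh)
        {
            // Drive a single hypothetical shape directly from the UI.
            TargetMesh->SetMorphTarget(TEXT("jaw_open"), Value);
        }
    }
};
```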

If you’re interested feel free to contact me for prices or questions :slight_smile:

PS: Unfortunately, streaming Faceshift facial data has, as far as I know, not yet been implemented in UE4, so I can only bake the results, no realtime… I guess someone needs to figure out how to stream the data :wink:
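
If someone wants to take a stab at it, the skeleton would probably be a TCP client built on UE4’s Networking module. Something like this ( the endpoint, port, and parsing step are assumptions on my part, not a working Faceshift implementation ):

```cpp
// Hypothetical sketch of a realtime bridge: connect to a local facial
// capture stream over TCP and pump raw bytes. Requires the "Sockets"
// and "Networking" modules in your Build.cs.
#include "Common/TcpSocketBuilder.h"
#include "Interfaces/IPv4/IPv4Endpoint.h"

FSocket* ConnectToCaptureStream()
{
    // Assumed: the capture app streams on the local machine on this port.
    FIPv4Endpoint Endpoint(FIPv4Address(127, 0, 0, 1), 33433);

    FSocket* Socket = FTcpSocketBuilder(TEXT("FacialCaptureStream"))
        .AsBlocking()
        .Build();

    if (Socket && !Socket->Connect(*Endpoint.ToInternetAddr()))
    {
        Socket->Close();
        return nullptr;
    }
    return Socket;
}

void PumpStream(FSocket* Socket)
{
    uint8 Buffer[4096];
    int32 BytesRead = 0;

    // Each received block would need decoding against the vendor's wire
    // format and turning into per-frame morph target values.
    while (Socket && Socket->Recv(Buffer, sizeof(Buffer), BytesRead))
    {
        // ParseCaptureBlock(Buffer, BytesRead);  // hypothetical
    }
}
```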