iPhone models: Compatibility with Live Link Face and ARKit

It really doesn’t take a rocket scientist to create all of it from scratch (a rough Blender scripting sketch of the pose-and-bake step follows the list):

Model your head.
Rig it to bones.
Open the ARKit face sample and check each blend shape.
Pose the face via bones to match the blend shape.
Create the blend shape based on the pose.
Repeat 52 times (or however many blend shapes exist).
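
For the pose-and-bake step, here’s a minimal Blender (bpy) sketch of the idea, assuming the head mesh is the active object with an Armature modifier named “Armature”; the bone name, pose values, and shape name are placeholders:

```python
# Minimal sketch: pose the face rig, then bake the current armature
# deformation into a shape key named after the ARKit shape.
import bpy
from math import radians

ARKIT_SHAPE = "jawOpen"  # one of the ~52 ARKit blendshape names

head = bpy.context.active_object          # the head mesh
rig = head.modifiers["Armature"].object   # the deforming armature

# 1) Pose the rig to roughly match the reference ARKit shape (placeholder pose).
jaw = rig.pose.bones["jaw"]               # hypothetical bone name
jaw.rotation_mode = 'XYZ'
jaw.rotation_euler = (radians(15.0), 0.0, 0.0)

# 2) Bake that deformation into a shape key, keeping the modifier for reuse.
bpy.ops.object.modifier_apply_as_shapekey(keep_modifier=True, modifier="Armature")
head.data.shape_keys.key_blocks[-1].name = ARKIT_SHAPE

# 3) Reset the pose and repeat for the next shape.
jaw.rotation_euler = (0.0, 0.0, 0.0)
```

You’d loop that over each reference shape, resetting the pose between bakes.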

If the objective is to get a person’s real expression onto a model, you can go a step further and have the person act out the 52 shapes so you can model them more closely.
In the case of advanced stuff where you also blend normals across shapes for wrinkles and such, you need to do that anyway.

I really don’t see anyone needing a plugin for it. It’s grueling custom work that you will only do when clients are willing to pay you for it, or on movie stuff where the budget is enough to justify it.

The 2 pipelines (bone vs shapes) are also very similar.
Bone movement is more commonly captured by recording dots (markers) on an actor’s face.
ARKit uses a similar process without dots.

Both processes produce nearly identical results - as in curves that generate an animation.
One is based on bones, one is based on several morph targets.
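
To make the “curves” point concrete: a Live Link / ARKit frame is basically ~52 named floats in the 0–1 range. As a rough sketch (the object name, key names, and frame values are assumptions), applying one frame to matching Blender shape keys looks like this:

```python
# A single ARKit/Live Link frame is essentially ~52 named floats in 0..1.
# Sketch: apply one frame to matching shape keys and keyframe it - that is
# all a "curve" really is once it lands in the engine.
import bpy

frame_values = {          # hypothetical captured frame
    "jawOpen": 0.42,
    "eyeBlinkLeft": 0.90,
    "eyeBlinkRight": 0.88,
    "browInnerUp": 0.15,
}

head = bpy.data.objects["Head"]           # assumed mesh object name
keys = head.data.shape_keys.key_blocks

for name, value in frame_values.items():
    if name in keys:
        keys[name].value = value
        keys[name].keyframe_insert("value", frame=bpy.context.scene.frame_current)
```

A bone-based pipeline does the same thing, except the curves drive bone transforms instead of morph target weights.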

Skeletal meshes and morphs have limits due to memory constraints.

Only a limited number of bone influences per vertex can be computed.

Blendshapes should not really have the same limitation (depends on the engine, of course), making almost any expression possible.
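
Some back-of-envelope numbers, purely illustrative and not engine-specific: skinning has a hard per-vertex influence cap, while morph targets only cost memory that scales with vertex count times shape count (and engines typically store only the deltas of vertices that actually move):

```python
# Back-of-envelope memory math (illustrative only, all counts are assumptions).
verts = 30_000            # assumed face-mesh vertex count
influences = 8            # assumed max bone influences per vertex
shapes = 52               # ARKit blendshape count

# Skin weights: one (bone index + weight) pair per influence per vertex.
skin_bytes = verts * influences * (2 + 4)        # uint16 index + float32 weight

# Dense morph targets: position + normal delta (3 floats each) per vertex per shape.
morph_bytes = verts * shapes * (3 + 3) * 4

print(f"skin weights ~{skin_bytes / 1e6:.1f} MB, dense morphs ~{morph_bytes / 1e6:.1f} MB")
```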

The future is blendshapes, because of the skeletal mesh / bone limitations.

Then again, we are also on the forum for an engine that fails to provide any quality assurance or attention to detail / performance whatsoever - including simple stuff like dual quaternion skinning.
So, I wouldn’t be one bit surprised if even the blendshape system has hard limits within the engine.
The most likely limits are going to come from lighting with the new Lumen and whatever other dynamic stuff they added… but I digress…

My point is, there is no point in figuring out how to “adjust” blendshapes in Blender.
Mess with the curve values if you aren’t happy with the animation.

Otherwise your problem is the base of the blendshape.
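
As a rough sketch of what “mess with the curve values” means in practice (the numbers are placeholders you’d tune by eye), remapping a recorded track is usually enough:

```python
# If the motion itself is off, it is usually faster to remap the recorded
# curve than to rebuild the blendshape.
def remap_track(track, gain=1.0, offset=0.0, lo=0.0, hi=1.0):
    """Scale/offset a list of per-frame curve values and clamp to [lo, hi]."""
    return [min(hi, max(lo, v * gain + offset)) for v in track]

jaw_open = [0.10, 0.55, 0.95, 0.40]              # hypothetical recorded samples
jaw_open_fixed = remap_track(jaw_open, gain=0.8) # tame an overshooting jaw
```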


Yup. It’d be cool if there were a ready-made plugin to import the MetaHuman Face Control Rig into Blender, but there’s no such plugin. The system I’m using is way easier than what you’ve described in your answer. Again, I have no Maya, so I fix the Audio2Face sync animations in Unreal Sequencer, baking the animations to the control rig. It’s simple. The process I’m using takes a few minutes. I only mentioned the iPhone because with facial mocap it’d take seconds (sorta), not minutes. I’ll stick with NVidia + MetaHuman + Unreal Sequencer until I see something faster and easier (or have money to buy an iPhone). I won’t switch my process for something longer and more tedious to do. (I’m a solo dev, for now. Time is a luxury a solo can’t afford.)

Aw, sorry. I have no idea what you’re talking about o__O Since the beginning, I’ve been talking about the automatic AI Audio2Face sync as a replacement for iPhone mocap. But I’m using Unreal Sequencer to improve the automatic AI animations, replacing facial mocap with AI auto sync.
For me: no bucks for an iPhone, so welcome, NVidia AI.
