iPhone models: Compatibility with Live Link Face and ARKit

Hey, guys! I have to buy an iPhone for facial mocap. I’ve heard some devs have been facing problems with iPhone 8 and data transfer to MetaHuman. So…

Once and for all, which iPhone models are compatible with Live Link Face and ARKit?

(P.S.: If there’s an alternative to the iPhone, I’m all ears.)

You need an iPhone X.


An iPhone XR doesn’t do the trick, does it?

Yes, it does. All models after 2017 (iPhone X and later, with the exception of, I think, the iPhone SE) include a TrueDepth camera, i.e. the sensor that allows depth estimation of the captured scene. It is mandatory for Apple Face ID and for Unreal’s iOS facial mocap.
So the iPhone X, XS, XS Max, XR, 11/11 Pro, 12/12 Pro, and 13/13 Pro should all support this, and in particular the XR (I have this model, and Unreal Live Link Face works well on it). I believe recent iPads also support this feature. Cheers.
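For anyone skimming the thread, the model list above can be boiled down to a quick lookup table. This is just a summary of what was said in this thread (plus the general rule that Face ID models have TrueDepth), so double-check Apple’s specs before buying:

```python
# Rough lookup of front-facing TrueDepth (ARKit face tracking) support,
# summarizing this thread -- verify against Apple's specs before buying.
TRUEDEPTH_SUPPORT = {
    "iPhone 8": False,         # no Face ID, so no ARKit face tracking
    "iPhone SE (2020)": False, # Touch ID models lack the TrueDepth camera
    "iPhone X": True,
    "iPhone XR": True,
    "iPhone XS": True,
    "iPhone XS Max": True,
    "iPhone 11": True,
    "iPhone 11 Pro": True,
    "iPhone 12": True,
    "iPhone 13": True,
}

def supports_live_link_face(model: str) -> bool:
    """True if the model should work with Live Link Face, per this thread."""
    return TRUEDEPTH_SUPPORT.get(model, False)

print(supports_live_link_face("iPhone XR"))  # True
print(supports_live_link_face("iPhone 8"))   # False
```

Unknown models default to `False` here, which is the safe assumption when shopping second hand.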

Oh! Thank you for confirming it!

that means a $200 second-hand X will work to get up and running
I needed to know that too.
saves me $200+ in startup cost

It’s a lot better if you get a higher megapixel count than the X offers (probably with a 12? Look into the front-facing camera specs before buying.)

All in all, results compared to classic capture are very “meh”.

Then again, you don’t have to draw dots on people’s faces or spend 2 hours setting up, so that’s your give/take.

Doesn’t do beards well though, whereas you can put dots on someone’s beard and mocap them with somewhat decent accuracy either way.

I have simple face anim needs
if it gets 90% synced I can fix the rest

Then NVidia’s Audio2Face/Audio2Emotion might be the greatest tool for you. It is a monster! It can sync to any sound, and it can now control emotion and blinking. I can’t fine-tune facial animations myself since I don’t have access to Maya; MetaHuman has no facial control rig import option for Blender, which is what I use.

I’m pretty sure Audio2Face uses RTX AI.
I have a GTX 1060, so I don’t think it will work with that card.
I wish they would give us better pipeline tools for other DCCs like Blender; very few independents can afford $3k per year for Maya.

Yes, it does. I’m using it because I still have no iPhone. Too expensive, and I don’t have the balls to buy a used one (I don’t trust it to last).

No, it won’t work with a GTX 1060. That was my old graphics card too. RTX is cheaper than ever; even in a third-world country, you can buy one for less than $500. Mine is an RTX 3050 EX, and Audio2Face works very well with it.

EXACTLY my thoughts. Why doesn’t a MetaHuman face control rig import option exist for Blender? I don’t know how hard it is to create a Python connection for this like the one they have for Maya, but it’s tough not having it for Blender.

Where I live in Van, it’s about $200 for an iPhone XS 64 GB and about $400 for a good-quality iPhone 11 second hand
that’s not too bad
even tho apple sux monkey balls
but it works, so that’s not bad
an RTX is still over $600 second hand here
I may not dig the iPhone or Apple, but I’m a use-what-works, keep-it-simple-and-stupid kind of tech
we really need a MetaHuman rig setup in Blender
they should also port the other Maya tools to Blender
nobody can pay $3k per year
I remember when Maya was $1k for life, shoulda bought it.

I agree
Epic is focused on Maya as their pipeline
but 90% of us indies are using Blender
and yeah, it isn’t Maya
but I remember POV-Ray and LightWave on an Amiga
Blender is a Swiss Army knife
and the texture export is so gnarly
but as for rigging and shape/phoneme tracking
I should be able to use an Android or PC device with visual input
but ARKit tech uses the depth sensor as a synchronous depth stream
so unless a major Android supplier gifts us depth hardware on their phones, we’ll have to stick with
a basic iPhone X or newer

If I were a programmer, I’d create a Python connection between the MetaHuman face control rig and Blender… But I’m an artist T__T
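For what it’s worth, the Blender half of such a bridge wouldn’t be huge: ARKit face capture delivers named weights in [0, 1] (the curve names like `eyeBlinkLeft` and `jawOpen` are real ARKit blendshape names), and in Blender you’d just copy them onto matching shape keys. Here’s a minimal sketch; the shape key names are hypothetical, and the actual `bpy` part is left as comments since it only runs inside Blender:

```python
# Sketch of the Blender half of an ARKit -> shape key bridge.
# ARKit face capture delivers ~52 named weights in [0, 1]; if the head mesh
# has shape keys with matching names, retargeting is mostly a dict copy.

# A few real ARKit blendshape names mapped to hypothetical shape key names
# on a Blender head mesh (rename to whatever your mesh actually uses).
CURVE_TO_SHAPE_KEY = {
    "eyeBlinkLeft": "EyeBlink_L",
    "eyeBlinkRight": "EyeBlink_R",
    "jawOpen": "JawOpen",
    "mouthSmileLeft": "Smile_L",
}

def remap_frame(arkit_weights: dict) -> dict:
    """Clamp incoming ARKit weights to [0, 1] and rename them to shape keys."""
    return {
        CURVE_TO_SHAPE_KEY[name]: min(max(value, 0.0), 1.0)
        for name, value in arkit_weights.items()
        if name in CURVE_TO_SHAPE_KEY
    }

frame = remap_frame({"jawOpen": 0.6, "eyeBlinkLeft": 1.3, "browDownLeft": 0.2})
print(frame)  # {'JawOpen': 0.6, 'EyeBlink_L': 1.0}

# Inside Blender you would then do something like (needs bpy, untested here):
#   key_blocks = bpy.data.objects["Head"].data.shape_keys.key_blocks
#   for shape_key, value in frame.items():
#       key_blocks[shape_key].value = value
#       key_blocks[shape_key].keyframe_insert("value", frame=current_frame)
```

Curves without a mapping (like `browDownLeft` above) are simply dropped, which is usually what you want while blocking in a rig.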

Good old times.

I saw some indie devs creating stuff for Android, but they haven’t opened anything for the public.

Too expensive here. My RTX was cheaper than an iPhone. Now, I’m using NVidia Audio2Face with Audio2Emotion and it gave great results with fast MetaHuman face animation integration. Still, I want to buy an iPhone, when prices go down o__o

absolute cheapest rtx in my hood is $500 rarely usually $650

Blender’s built-in rigs use bones. MetaHumans and Apple AR use blend shapes.

The two systems aren’t even remotely compatible or comparable.
One gives you nearly complete freedom to distort however you feel like; the other is limited to the values you bake into morphs/shape keys.

To even begin to pass data from one to the other, you’d have to treat it as face capture all over again:
pick a specific vertex on the morph-target-animated face that a specific bone of the face rig will follow along.
And you’d do all of this for no reason at all.
You end up with an animated skeletal mesh that contains a lot of bones when none are needed to achieve the same result, so you’ve wasted your time making something that might work momentarily but won’t ever work in the long run.
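For the curious, the “bone follows a vertex” idea above boils down to evaluating the morphed vertex position each frame and snapping the bone to it. A standalone sketch of that math, with made-up vertex positions and a single hypothetical `jawOpen` shape:

```python
# Minimal math behind "make a face-rig bone follow a vertex on a
# morph-target-animated mesh": evaluate the blended vertex position,
# then treat it as the bone's target. Values are illustrative only.

def blended_position(base, deltas, weights):
    """base: rest position (x, y, z); deltas: {shape_name: (dx, dy, dz)};
    weights: {shape_name: 0..1}. Returns the morphed vertex position."""
    x, y, z = base
    for name, (dx, dy, dz) in deltas.items():
        w = weights.get(name, 0.0)  # unused shapes contribute nothing
        x += w * dx
        y += w * dy
        z += w * dz
    return (x, y, z)

# Hypothetical jaw vertex: rest pose plus a "jawOpen" delta at half strength.
rest = (0.0, -1.0, 0.2)
deltas = {"jawOpen": (0.0, -0.8, 0.1)}
target = blended_position(rest, deltas, {"jawOpen": 0.5})
print(target)  # (0.0, -1.4, 0.25)
```

Doing this per bone, per frame, is exactly the busywork the post above warns about: you recompute positions that the shape keys already encode.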

There are plenty of alternatives that still use morph targets/faceAR data but don’t rely on a skeletal mesh. Try Faceware for one.

Or, you can just use the skeletal mesh like we did 10 years ago. You won’t be able to get 100 characters to all have different expressions as they walk around on your end user’s systems. It’s a waste of time for future development, but it does still work…

Thanks! (Yep, I’m from the old days when everything was done with skeletal meshes and morphs combined in a freak-show way. I’ve been sticking around since UDK.)
Right now, I’m using an RTX + NVidia Audio2Face + MetaHuman + control-rig keyframes baked in Sequencer for fine-tuning. That’s my workflow, so I don’t need to rely on Blender to tweak facial animation. An iPhone would be faster than everything I’ve mentioned, but hey, we do what we can with the resources we have.

I use Blender as an intermediary for ZBrush blendshapes all the time
if you take an MB-Lab add-on character and export it, you’ll see many blendshapes for phonemes and facial expressions
bones are used for the body and shape keys for the head
Blender can do it fine, there just isn’t money in it for a plugin creator

You can do this automatically in NVidia Audio2Face (https://www.youtube.com/watch?v=Fh0qpSA4TOw) :smiley: I used to use ZBrush, but as a solo dev, I needed to speed up my process. That’s the reason I use MetaHuman. You can also generate blendshapes for a custom head automatically in NVidia Audio2Face, and that works with Blender too. It can save you a ton of time.