What it needs moving forward is machine learning for the weight generation; that’s work I haven’t started.
You need an ARCore-compatible Android smartphone on the same WiFi network as your PC, and you need the free TCP plugin.
Start the app, check that it detects your face (it should show dots over it), start the demo project, and set up the virtual Android phone; you should be good to go.
Now it works!!! I started the game first and then the app and it worked.
But the facial expressions still don’t track well. Something is done in the video that I don’t understand (sometimes a function is activated, sometimes not). Can I improve the quality with this?
Yes, the way it’s working right now, you can’t use all 50 facial expressions at the same time. It needs machine learning.
Right now you need to activate the blendshapes you want to use (like I do in the video).
You make a reference face, select the reference face on the blueprint, and toggle the boolean to record it (it should toggle on->off instantly, indicating it was recorded).
You then select the face blendshape you want to activate, make the face yourself, and toggle the boolean to record it.
Behind the scenes it compares a subset of vertices and generates a blend weight (0…1) for that blendshape. You can also adjust the bias/scale to make the blendshape weight transition more aggressive.
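To give an idea of what that comparison looks like, here’s a simplified sketch of that kind of weight computation (not the exact code in the project; the vertex subset and the way bias/scale are applied are illustrative):

```cpp
// Simplified sketch of per-blendshape weight estimation from tracked vertices.
// Illustrative only: the project's actual vertex subset and bias/scale handling may differ.
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float X, Y, Z; };

static float Dist(const Vec3& A, const Vec3& B)
{
    const float dx = A.X - B.X, dy = A.Y - B.Y, dz = A.Z - B.Z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Reference = recorded neutral face, Target = face recorded for this blendshape,
// Current = vertices streamed from ARCore this frame. All three hold the same
// subset of face-mesh vertices.
float ComputeBlendWeight(const std::vector<Vec3>& Reference,
                         const std::vector<Vec3>& Target,
                         const std::vector<Vec3>& Current,
                         float Bias = 0.0f, float Scale = 1.0f)
{
    float MaxDisplacement = 0.0f; // reference -> recorded target pose
    float CurDisplacement = 0.0f; // reference -> current frame
    for (size_t i = 0; i < Reference.size(); ++i)
    {
        MaxDisplacement += Dist(Reference[i], Target[i]);
        CurDisplacement += Dist(Reference[i], Current[i]);
    }
    if (MaxDisplacement <= 0.0f)
    {
        return 0.0f; // target pose identical to reference: nothing to drive
    }
    // Normalise into 0..1, then let bias/scale sharpen or soften the transition.
    const float Raw = CurDisplacement / MaxDisplacement;
    return std::clamp((Raw - Bias) * Scale, 0.0f, 1.0f);
}
```

It also shows why enabling every blendshape at once doesn’t work: each weight reacts to the overall displacement from the reference face, so the weights overlap instead of isolating the intended expression.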
You might have to restart the app if it’s not detected; the client disconnection code was never fully implemented.
Hello, I have tried what you told me, but it still doesn’t work properly. (Is that normal? I have the same problem as this guy: FaceCaptureTest - YouTube.)
Have I understood it correctly?
You activate all facial expressions
Activate “Select reference location”.
Make a reference face: close your mouth for jawOpen.
Then click on “Save current Blendshape”.
Then click again on “Save current Blendshape” and open the mouth.
Thank you for your patience.
Translated with DeepL (free version).
I realize the whole setup isn’t user friendly at all. Hopefully I’ll find the time to improve on it.
You should not activate all facial expressions at the same time.
The weight computation is not sophisticated enough to handle all facial blendshapes at the same time.
Thanks for releasing this. I got it working after messing with it a bit.
I am not sure if this is a Google ARCore problem, but it doesn’t seem to track blinking. Because of that, it also seems to float when you do blink, but I was really impressed with how accurate it was.
Additionally, my phone (Xiaomi Mi 8) would have issues once the screen went to sleep.
Otherwise, thanks for releasing this to play with. I can’t find anything Android side that is remotely in this price-range =)
ARCore doesn’t detect blinking, sadly. The best idea I’ve had so far is adding some procedural blinking.
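If someone wants to experiment with it, a procedural blink is basically a periodic 0…1 curve injected into the eyeBlinkLeft/eyeBlinkRight weights; a rough sketch (nothing like this exists in the project yet, and the timing constants are arbitrary):

```cpp
// Rough sketch of procedural blinking: since ARCore doesn't report blinks,
// add a periodic 0..1 curve on top of the eyeBlink blendshape weights.
// The interval/duration values are arbitrary placeholders.
#include <cmath>

struct ProceduralBlink
{
    float BlinkInterval = 4.0f;   // seconds of open eyes between blinks
    float BlinkDuration = 0.25f;  // seconds for a full close + open
    float TimeSinceCycleStart = 0.0f;

    // Call once per frame; returns the weight to feed into eyeBlinkLeft/Right.
    float Tick(float DeltaSeconds)
    {
        TimeSinceCycleStart += DeltaSeconds;
        if (TimeSinceCycleStart >= BlinkInterval + BlinkDuration)
        {
            TimeSinceCycleStart = 0.0f; // start the next open/blink cycle
        }
        if (TimeSinceCycleStart < BlinkInterval)
        {
            return 0.0f; // eyes fully open between blinks
        }
        // Inside a blink: ramp 0 -> 1 -> 0 with a half sine over BlinkDuration.
        const float T = (TimeSinceCycleStart - BlinkInterval) / BlinkDuration;
        return std::sin(T * 3.14159265f);
    }
};
```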
I can’t find the androidlivelinksubject at the first step; there is only one livelinkAnimationVirtualsubject. What should I do to get the same result as you?
I have tried for days now, and just can’t get the metahuman to move at all.
I can see the movement fine on the phone and in UE4, but just can’t link it at all.
Any chance you could do a tutorial with speech, step by step?
Looks brilliant though, if only I could get it to work.
Sometimes you have to toggle the LiveLink source back and forth before the metahuman moves.
I’m afraid I don’t have time to provide support for it, and it has to be used and expanded upon as-is.
There’s a layer of machine learning needed for proper generation of blendshape weights which hasn’t been done; in its latest version I was only able to enable a couple of blendshapes at once so that the various weights don’t overlap each other.
Thank you for releasing this! I almost have it working. I am able to see the runtime points visualization, but so far, like @VertexDesign, I have not been able to figure out how to link it to the metahuman. Which blueprint / variable do I edit to get the metahuman to follow the Android stream instead of the Apple ARKit stream? I’m currently using UE 4.27.2. Has anyone else had success with this on 4.27, or do I need to run this on 4.26?
I was able to get it working on UE4.26.
Now I am trying to get it working on UE5.0 EA2. I was able to compile the TCP socket plugin for UE5.0, but so far I have not had any success getting the Android phone to connect. I do not see the runtime points visualisation. I am also getting the following error if I enable the log output in the TCP socket plugin:
Couldn't connect to server. TcpSocketConnection.cpp: line 410
Hi, same problem here. I can see the dot face moving but not the metahuman. Help!
Haven’t checked this project in a while as I ended up buying an iPhone… <.<
You need to enable only the blendshapes you intend to use, as the blend weight generation is trivial and doesn’t allow all blendshapes to be enabled at once.
To be done properly it would require extra work and a neural-network-based classification system.
Hi @MaximeDupart, thank you for the response. I have made some progress. I was able to get the AndroidFaceCapture app to work with UE5 EA2. The data coming in from the Android phone was animating the metahuman face.
However, after updating UE to 5.0 Preview 1 or Preview 2, I am no longer able to drive the metahuman face with the Android App. I do still see the runtime visualization points though, and I see the numbers updating under the blendmap. Livelink is reporting the error:
Can't evaluate frame for 'AndroidSmartphone'. No data was available.
I’m thinking that the issue might be related to some warnings that were reported during build:
...\Source\AndroidFaceCapture\Private\LiveLink\VirtualAndroidARLiveLinkSubject.cpp(101): warning C4996: 'ULiveLinkVirtualSubject::FrameSnapshot': VirtualSubject FrameSnapshot is now private to have thread safe accesses. Please use UpdateStaticDataSnapshot or UpdateFrameDataSnapshot to update its value Please update your code to the new API before upgrading to the next release, otherwise your project will no longer compile.
(The same C4996 warning is repeated for lines 102, 143, 145, 148, 149, and 209 of VirtualAndroidARLiveLinkSubject.cpp.)
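Here is roughly what I understand the warnings are asking for. This is just my guess at the shape of the fix: only the two function names come from the warning text, while the method name, the struct types, and the MoveTemp usage are assumptions that would need to be checked against LiveLinkVirtualSubject.h:

```cpp
// Guess at the change the C4996 warnings ask for in VirtualAndroidARLiveLinkSubject.cpp.
// Only UpdateStaticDataSnapshot / UpdateFrameDataSnapshot come from the warning text;
// the method name, struct types, and MoveTemp usage are assumptions.
#include "LiveLink/VirtualAndroidARLiveLinkSubject.h"
#include "LiveLinkTypes.h"

void UVirtualAndroidARLiveLinkSubject::PushSnapshotExample() // hypothetical helper
{
    // Build the static data (curve names) and frame data (curve values) as before...
    FLiveLinkStaticDataStruct NewStaticData(FLiveLinkBaseStaticData::StaticStruct());
    FLiveLinkFrameDataStruct NewFrameData(FLiveLinkBaseFrameData::StaticStruct());

    // The old code wrote into FrameSnapshot.StaticData / FrameSnapshot.FrameData
    // directly, which is what now triggers warning C4996.
    // The thread-safe API the warning recommends:
    UpdateStaticDataSnapshot(MoveTemp(NewStaticData));
    UpdateFrameDataSnapshot(MoveTemp(NewFrameData));
}
```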
I know you have not worked on this code in a while, but I’m sure you know it much better than I do! Do you think those warnings are related to why Livelink cannot evaluate the frames?
Any tips on ways to modify the code would be much appreciated.
Thanks!
Interesting, I’ll have to take another look at it on 5.0.
Now that I have an iPhone, I wonder if I can set up the minimal machine learning steps that are missing.
Hi Maxime,
We were able to make the changes to make it compatible with UE5. We also converted it into a plugin so that it’s easier to integrate into our app.
Now we’re trying to get the facial animation capture closer to the iPhone.
I can push those changes to the GitHub repo and make a pull request if you want.
I started updating some of the facial targets and found some strange behavior. JawOpen, mouthClose, mouthFunnel, and browInnerup were already working for the most part. Then I first updated the reference face, which helped close the mouth on the base pose. Updating jawForward, jawLeft, and jawRight seemed to work OK, but when I added mouthPucker, things started to get weird. After that, whenever I opened my jaw with my mouth open, the character’s mouth closed. I’m thinking I need to verify that all the LiveLink signals are wired up correctly before changing any of the facial target data. @MaximeDupart, can you give me any advice on where to look to verify that each data curve coming out of BP_BlendshapesHandler is driving the correct target?
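One thing I was planning to try is dumping what LiveLink reports for the subject each frame, so I can see which curve actually moves when I make a single expression. A rough sketch of that check follows; the subject name ‘AndroidSmartphone’ is taken from the error I posted earlier, and the Basic role is a guess on my part:

```cpp
// Debug helper idea: log every LiveLink curve name/value for the subject each frame,
// to confirm e.g. only jawOpen moves when the jaw opens. Subject name and role are
// guesses; adjust them to whatever the project actually registers.
#include "CoreMinimal.h"
#include "Features/IModularFeatures.h"
#include "ILiveLinkClient.h"
#include "LiveLinkTypes.h"
#include "Roles/LiveLinkBasicRole.h"

static void DumpLiveLinkCurves()
{
    IModularFeatures& Features = IModularFeatures::Get();
    if (!Features.IsModularFeatureAvailable(ILiveLinkClient::ModularFeatureName))
    {
        return;
    }
    ILiveLinkClient& Client =
        Features.GetModularFeature<ILiveLinkClient>(ILiveLinkClient::ModularFeatureName);

    FLiveLinkSubjectFrameData Frame;
    if (!Client.EvaluateFrame_AnyThread(FName("AndroidSmartphone"),
                                        ULiveLinkBasicRole::StaticClass(), Frame))
    {
        return; // same situation as the "No data was available" error mentioned earlier
    }

    const FLiveLinkBaseStaticData* StaticData = Frame.StaticData.Cast<FLiveLinkBaseStaticData>();
    const FLiveLinkBaseFrameData* FrameData = Frame.FrameData.Cast<FLiveLinkBaseFrameData>();
    if (!StaticData || !FrameData)
    {
        return;
    }
    const int32 Num = FMath::Min(StaticData->PropertyNames.Num(), FrameData->PropertyValues.Num());
    for (int32 i = 0; i < Num; ++i)
    {
        UE_LOG(LogTemp, Log, TEXT("%s = %f"),
               *StaticData->PropertyNames[i].ToString(), FrameData->PropertyValues[i]);
    }
}
```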