The Android app doesn’t handle client disconnection perfectly; you might have to restart the app if you don’t get any incoming data.
The project requires the free ‘TCP Socket Plugin’.
The biggest limitation right now is converting the incoming data into blend weights. Putting together a robust system to generate this data can be fairly complex and advanced. Currently I’m simply interpolating between 0.0 and 1.0 based on the average vertex-to-vertex distance, over a subset of vertices defined per blendshape, between each vertex’s current position and its maxed-out blendshape position.
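A minimal sketch of that computation, assuming a rest (neutral) pose is available to normalize the 0.0-1.0 range (the rest pose and all names here are my assumptions, not the project’s actual code):

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec3 { float X, Y, Z; };

static float Dist(const Vec3& A, const Vec3& B)
{
    const float dx = A.X - B.X, dy = A.Y - B.Y, dz = A.Z - B.Z;
    return std::sqrt(dx * dx + dy * dy + dz * dz);
}

// Average how close each tracked vertex is to its "blendshape maxed out"
// position: 1.0 when the subset sits on the maxed-out pose, 0.0 at rest.
// Assumes all three arrays have the same length.
float ComputeBlendWeight(const std::vector<Vec3>& Current,   // runtime positions of the subset
                         const std::vector<Vec3>& MaxedOut,  // positions at weight 1.0
                         const std::vector<Vec3>& Rest)      // neutral-face positions
{
    float CurrentDist = 0.0f, RestDist = 0.0f;
    for (size_t i = 0; i < Current.size(); ++i)
    {
        CurrentDist += Dist(Current[i], MaxedOut[i]);
        RestDist    += Dist(Rest[i],    MaxedOut[i]);
    }
    return RestDist > 0.0f ? std::clamp(1.0f - CurrentDist / RestDist, 0.0f, 1.0f) : 0.0f;
}
```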
Unreal Engine Project to read this stream of data, generate blend weights, and hook it into LiveLink:
Hello Unreal Community,
Opening a new topic regarding my recent effort (24 hours, literally) to get Android face capture going. The goal is to create an Android alternative to the Apple FaceAR solution, with MetaHumans compatibility.
I am starting from scratch and everything is going to be open source, starting with the Android app, which sends ArCore face data to Unreal Engine through TCP (using the free TCP Socket Plugin for Unreal).
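For the curious, the wire format isn’t documented in this thread, but the receiving side could look something like this minimal sketch, assuming each TCP message is a comma-separated list of floats (ParseFacePacket is a hypothetical helper, not part of the TCP Socket Plugin):

```cpp
#include <sstream>
#include <string>
#include <vector>

// Split one text message into its float values (e.g. x,y,z per face point).
std::vector<float> ParseFacePacket(const std::string& Message)
{
    std::vector<float> Values;
    std::stringstream Stream(Message);
    std::string Token;
    while (std::getline(Stream, Token, ','))
    {
        Values.push_back(std::stof(Token)); // throws std::invalid_argument on malformed input
    }
    return Values;
}
```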
This is what I have so far (recorded on a Galaxy S9):
quote: "wow wow wow wow wow wow"
https://www.youtube.com/watch?v=_kLu96hfot0
https://i.imgur.com/w20bTLD.png
Got the head bone movement imported; time to lay the groundwork for blendshapes!
Now that the whole Android part is usable enough and sends the data I need, I am back in known territory.
I did fix the demo project requirements:
The initial version required 4.26.2-chaos built from source, whereas it now works with 4.26 downloaded from the Epic Games Launcher.
Thanks, I rebuilt with VS2019 and followed all the steps in the video. It works on my side now.
I am looking forward to it being improved for more accurate results (such as blinks).
Keep up the good work.
I always wanted to figure out if it’s possible to convert face mesh landmarks to blendshapes. It’s great to see your implementation. Thank you!
In your project you use the AndroidBlendshape assets to accomplish this. I am wondering how you get the values in these AndroidBlendshape assets. Do you have utilities to generate these assets?
If you right-click in the Content Browser you should see the category “FaceAndroid”, which allows you to create two objects:
AndroidBlendShape : a blendshape definition; a subset of relevant indices from the face, and the position of those face points when the blendshape is maxed out.
AndroidBlendshapeWeights : holds a map of blend weights computed at runtime, purely based on aggregated distances between each relevant face point’s runtime position and its maxed-out blendshape position.
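As a rough mental model, the two assets might look like this in C++ form (the array names “Vertices to Control” and “Vertices Max Mvt” come up later in the thread; everything else here is my assumption, not the project’s actual code):

```cpp
#include "CoreMinimal.h"
#include "Engine/DataAsset.h"
#include "AndroidFaceAssets.generated.h"

UCLASS()
class UAndroidBlendShape : public UDataAsset
{
    GENERATED_BODY()
public:
    // Subset of ArCore face-mesh vertex indices this blendshape tracks.
    UPROPERTY(EditAnywhere)
    TArray<int32> VerticesToControl;

    // Position of each tracked vertex when the blendshape is maxed out.
    UPROPERTY(EditAnywhere)
    TArray<FVector> VerticesMaxMvt;
};

UCLASS()
class UAndroidBlendshapeWeights : public UDataAsset
{
    GENERATED_BODY()
public:
    // Blendshape name -> weight in [0, 1], recomputed every frame from the
    // aggregated point-to-maxed-out-position distances.
    UPROPERTY(VisibleAnywhere)
    TMap<FName, float> Weights;
};
```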
Using distances like this is really trivial; I wouldn’t be surprised if machine learning were the industry standard, past this point, for converting points to weights. I might give it a shot later if I find relevant data.
Hi, thank you for the detailed explanation. There are two arrays in the AndroidBlendShape asset, Vertices to Control and Vertices Max Mvt, and both have hundreds of elements. I think it’s quite difficult to fill these arrays by hand, so I am wondering whether you have any utilities. Thank you.
On the default demo map there’s a blueprint representing the face point indices, which doubles as the tool for extracting the relevant indices:
From the blueprint viewport, go into top view/lit mode and adjust the inclusion box extent/position (the extent, not the scale) to select the indices you want to extract (e.g. the points around the left eye).
Once your box is placed, go into the construction script, select the blendshape you want to send the extracted indices to, toggle the safety boolean off, and compile the blueprint.
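In C++ terms, the extraction amounts to something like this sketch (the project does it in Blueprint; function and parameter names here are illustrative):

```cpp
#include "CoreMinimal.h"

// Keep the indices of the face points that fall inside the inclusion box.
TArray<int32> ExtractIndicesInBox(const TArray<FVector>& FacePoints, const FBox& InclusionBox)
{
    TArray<int32> Selected;
    for (int32 Index = 0; Index < FacePoints.Num(); ++Index)
    {
        if (InclusionBox.IsInside(FacePoints[Index]))
        {
            Selected.Add(Index);
        }
    }
    return Selected;
}
```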
If anyone is looking to apply some machine learning to the weight generation, I believe it could be a very rewarding application of it.
It could surely be given to students as a project, too.
It does not work. When I start the game, nothing happens.
Do I have to do anything other than what’s in the video and setting the port and ID?
I would so love to make it work! It is so cool.
It is imperative that you continue to work on this.
I hope the text is good; I used a translator.