Hi hive mind! I'm trying to find a way to animate 2D cartoon face shapes (think South Park face style) with Apple's ARKit (iPhone Live Link) rather than Oculus Lip Sync. I'm having a hard time finding relevant information and tutorials. Does anyone here know of anything that could get me started?
Perhaps you could create the animations in Adobe Character Animator and then import them into UE as a PNG image sequence? That software drives characters with webcam mocap, so it's super fast to animate with.
Interesting idea! But I need it to run live for a live show.
Adobe Character Animator can be used live, so having a similar tool in Unreal would be amazing. Please post your findings here, as I'd love to use 2D characters in Unreal!
Hey again. I have been thinking about this for a while. I have one idea.
If I can read the iPhone blendshape values live in Unreal, then based on whether a value is near 0.0 or 1.0 I could swap out a texture on the face. I think this could work, but I need to do a bit more research into how, as I'm not really good at Blueprinting. Hopefully someone here could lend a hand?
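For anyone who'd rather try this in C++ than Blueprints, here is a minimal sketch of the idea. It assumes the Live Link Face app is connected and publishing a subject; the subject name "iPhone" and the material parameter "FaceTexture" are placeholders you'd replace with your own, and exact Live Link API details can vary a bit between engine versions:

```cpp
// Sketch: poll one Live Link face curve each tick and swap the face texture
// at a threshold. Requires the Live Link plugin and the "LiveLinkInterface"
// module in your Build.cs.
#include "ILiveLinkClient.h"
#include "Features/IModularFeatures.h"
#include "Roles/LiveLinkBasicRole.h"
#include "Materials/MaterialInstanceDynamic.h"

void UpdateFaceTexture(UMaterialInstanceDynamic* FaceMID, UTexture* OpenMouthTex, UTexture* ClosedMouthTex)
{
    IModularFeatures& Features = IModularFeatures::Get();
    if (!Features.IsModularFeatureAvailable(ILiveLinkClient::ModularFeatureName))
    {
        return; // Live Link isn't running.
    }
    ILiveLinkClient& Client = Features.GetModularFeature<ILiveLinkClient>(ILiveLinkClient::ModularFeatureName);

    // "iPhone" stands in for whatever subject name your Live Link Face app publishes.
    FLiveLinkSubjectFrameData SubjectData;
    if (!Client.EvaluateFrame_AnyThread(FLiveLinkSubjectName(FName(TEXT("iPhone"))),
                                        ULiveLinkBasicRole::StaticClass(), SubjectData))
    {
        return;
    }

    const FLiveLinkBaseStaticData* StaticData = SubjectData.StaticData.Cast<FLiveLinkBaseStaticData>();
    const FLiveLinkBaseFrameData* Frame = SubjectData.FrameData.Cast<FLiveLinkBaseFrameData>();
    if (!StaticData || !Frame)
    {
        return;
    }

    // Find the ARKit jaw-open curve among the published blendshape properties.
    const int32 Index = StaticData->PropertyNames.IndexOfByKey(FName(TEXT("JawOpen")));
    if (Index == INDEX_NONE || !Frame->PropertyValues.IsValidIndex(Index))
    {
        return;
    }
    const float JawOpen = Frame->PropertyValues[Index];

    // Treat the 0..1 curve as a binary switch, as described above.
    // "FaceTexture" is an assumed texture parameter on the face material.
    FaceMID->SetTextureParameterValue(TEXT("FaceTexture"),
                                      JawOpen > 0.5f ? OpenMouthTex : ClosedMouthTex);
}
```

You'd call this from an actor's Tick (or a timer) after creating the dynamic material instance on the face mesh.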
Just look into ARKit if you are going to do it with an iPhone.
The data you can transmit is driven by 52 blendshapes.
You create an iPhone app that uses ARKit and transmits the data to something else.
That something interprets the data however you need it to (see the sketch below).
If you don't want to create a custom ARKit app, then you have to make a 2D character that uses the blendshape data to animate.
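To make the "interprets the data however you need" step concrete, here's a rough sketch in plain C++ of collapsing a handful of the 52 curve weights into one of a few discrete South Park-style mouth states by picking the strongest relevant shape. The shape set and the 0.3 floor are hypothetical, just something to tune:

```cpp
#include <array>
#include <utility>

// Hypothetical set of pre-drawn cartoon mouth states.
enum class MouthShape { Closed, Open, Smile, Pucker };

// Collapse a few ARKit curves (each 0..1) into one discrete texture choice.
MouthShape PickMouthShape(float jawOpen, float mouthSmile, float mouthPucker)
{
    const std::array<std::pair<float, MouthShape>, 3> candidates = {{
        { jawOpen,     MouthShape::Open   },
        { mouthSmile,  MouthShape::Smile  },
        { mouthPucker, MouthShape::Pucker },
    }};

    float best = 0.3f; // Minimum weight before we react at all.
    MouthShape result = MouthShape::Closed;
    for (const auto& [weight, shape] : candidates)
    {
        if (weight > best) { best = weight; result = shape; }
    }
    return result;
}
```

Picking a single winner per frame sidesteps the mixed-value problem somewhat, since you never need any one curve to reach a full 1.0.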
South Park animations aren't exactly smooth, and they're also rather custom when it comes to different expressions.
I'm not sure where you want to go overall, but there will likely be some impossible scenarios, like distinguishing an angry face from squinting.
With only the 52 shapes to work from, you are just going to have a hard time.
Because of mixed-value data (you rarely get a full 1.0 on a single shape), you'll often get just "bad" results.
You can try to use blendshapes to shift parts of the face up and down to hide/clip and present the different expressions. That too is very hit-or-miss without some sort of clamp range on the incoming data (sketched below)…
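On the clamp-range point: one way to keep mixed-value curves from flickering the texture is to remap the raw weight through a clamp range and gate it with a little hysteresis. All the thresholds here are guesses you'd tune per shape:

```cpp
#include <algorithm>

// Remap a raw blendshape weight so that e.g. 0.2..0.7 becomes the effective
// 0..1 range, then switch on/off at different thresholds (hysteresis) so the
// texture doesn't flicker when the value hovers near a single cutoff.
struct ShapeGate
{
    float InMin = 0.2f;  // Raw values below this read as 0.
    float InMax = 0.7f;  // Raw values above this read as 1.
    float OnAt  = 0.6f;  // Switch on above this (after remapping).
    float OffAt = 0.4f;  // Switch off below this (after remapping).
    bool  bIsOn = false;

    bool Update(float RawWeight)
    {
        const float Remapped =
            std::clamp((RawWeight - InMin) / (InMax - InMin), 0.0f, 1.0f);
        if (bIsOn && Remapped < OffAt)  { bIsOn = false; }
        if (!bIsOn && Remapped > OnAt)  { bIsOn = true;  }
        return bIsOn; // e.g. gate.Update(jawOpen) ? open texture : closed texture
    }
};
```

The gap between OnAt and OffAt is what absorbs the noise; widen it if the incoming data is especially jittery.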
So… I'm going to answer my own question.
After thinking about this for a while, the solution is as follows.
I'm using Adobe Character Animator to stream the face animation (face and voice are captured inside Character Animator) live over NDI into Unreal Engine as a media texture. This works really well, but because of NDI lag it's only used for actor reference.

We record all of the actor's animation inside Character Animator, export it as a PNG sequence back into Unreal Engine, and add it to the recorded sequence together with the mocap data. Then we render the sequence out. I can't show anything due to NDA, but this works really well in a near-live post environment.
Hope this helps others who are looking for 2D facial decal-style animation.
Seems like overkill. There are apps that capture the mocap data and export it for you.