Hi guys, I have a game with fat, round little pig characters. I need to animate their bodies and faces. I have my eye on a body animation pack for similarly shaped characters, and I will rig my characters to match that skeleton.
I also need facial animations. I’m hoping to find a facial mocap app for this.
My questions:
(How) can I rig and animate my characters with the body using that animation pack and the face using the facial mocap app, and have it all work together?
What is a good facial mocap app I can use with Unreal Engine?
If I use a facial mocap app, do I have to rig the facial skeleton to match a skeleton that the app provides? Or how does it work?
This is the last place I can think of to look for an answer to these questions. Nobody else has had an answer. I really hope you guys can help, and I would greatly appreciate it.
We're using bones: a rig that our 3D modeler will build from scratch to match whatever the facial app requires and to match the body from the animation pack.
Can you use two different sources of animations on one character (face & body)?
How do I know how to set up the facial skeleton so that it is compatible with the app? I'd really appreciate it if you could help me solve this problem.
Yes, you can. For example, in Sequencer you can add a blueprint that contains both meshes.
Then you can add each one individually, as with a MetaHuman, and assign animations to each.
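If it helps to see that setup in code, here's a minimal C++ sketch of an actor holding both meshes, assuming your modeler delivers the face and body as separate skeletal meshes (the class name, component names, and the "head" socket are all assumptions, not anything from a specific pack):

```cpp
// PigCharacter.h: a minimal two-mesh actor sketch.
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "Components/SkeletalMeshComponent.h"
#include "PigCharacter.generated.h"

UCLASS()
class APigCharacter : public AActor
{
    GENERATED_BODY()

public:
    APigCharacter()
    {
        // Body mesh: rigged to the animation pack's skeleton.
        BodyMesh = CreateDefaultSubobject<USkeletalMeshComponent>(TEXT("BodyMesh"));
        RootComponent = BodyMesh;

        // Face mesh: driven separately by the facial mocap takes.
        // Attaching it to a head socket keeps it following the body
        // animation ("head" is an assumed socket name; use whatever
        // your rig actually defines).
        FaceMesh = CreateDefaultSubobject<USkeletalMeshComponent>(TEXT("FaceMesh"));
        FaceMesh->SetupAttachment(BodyMesh, TEXT("head"));
    }

    UPROPERTY(VisibleAnywhere, Category = "Mesh")
    USkeletalMeshComponent* BodyMesh;

    UPROPERTY(VisibleAnywhere, Category = "Mesh")
    USkeletalMeshComponent* FaceMesh;
};
```

In Sequencer you then add this actor, expand its track, and each skeletal mesh component gets its own animation track: body clips from the pack on one, facial mocap takes on the other.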
I'm afraid I don't have an Android device, so I can't give you more information about it…
I remember there was nothing good for free; back then the iPhone was actually cheaper than the software, haha.
Thank you for this valuable info. So won't any of them be compatible with UE?
And does the app you have give you a skeleton asset the way an animation pack does? Or how do you build your character's rig to match the app's, so it knows which bone is which?
Hi, I plan to have the skeleton made to the specifications required by the mocap app. So I'm trying to figure out what the app requires. But I will just find one and do some research on it.
Hi, is that link you shared to Rokoko supposed to take me to the Android app? It takes me to a store with physical mocap gear. I'm really hoping this app will work, because I've exhausted all other leads. There doesn't appear to be any other facial mocap Android app in existence, which is hard to believe.
Oh wow, awesome. So what do I do, download this on my PC? But how would I get it on my phone? Or is this just a PC program? I'm going to try to figure out how to download it and how it works. Thanks for this!
OK, I'm not familiar with GitHub. Which of those things do I download? I'm also not seeing a download button. Sorry, I don't really use GitHub normally. I'm looking around. Thanks!
Edit: Oh sorry, will this work for 4.27? That’s the version we’re developing the game on. If not, is there a way I can set it up for 4.27?
And this doesn't work for Android builds, does it?
Oh, this is great. But does facemesh work with the GitHub project? Sorry, I'm just stupidly not fully understanding; maybe because this is all new to me.
I'm not seeing the actual facemesh Android app. I'll read this link, but do you have a link to the actual app?
I'm just wondering how I capture or make the facial animations with that project you sent. Does it work with a specific Android app, or is this something else?
There are a few outdated versions of an Android facial-capture development project that circulated around the forum for a bit.
I think the project became a paid Unreal plugin.
However, the end quality is inferior to the iPhone's because of its TrueDepth camera.
You have to realize that to properly animate a face you need to be able to read movement in depth (near/far) from the camera.
Standard cameras provide only 2D feedback; to do it properly with plain cameras you would need at least four of them (left, right, top, front). Gathering enough information from just one camera requires a very specific camera capable of reading depth.
Apple built this for facial recognition, so that, in theory, you cannot spoof it with a picture of yourself.
The ARKit SDK leverages that camera to take depth readings, which makes animation somewhat possible (note that it's still an approximation).
Anything else you find out there works purely by approximation, making the results much more questionable.
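Whichever app you pick, the practical upshot for your rig is the same: ARKit-style apps stream a fixed set of named blendshape curves (jawOpen, eyeBlinkLeft, and so on), so the face mesh needs morph targets with matching names, or those curve values have nothing to drive. Here's a rough C++ sketch of applying such curves, assuming you already receive them as name/value pairs each frame (how they arrive depends on the app; Live Link is the usual route, and the TMap parameter here is just a stand-in for that):

```cpp
#include "Components/SkeletalMeshComponent.h"

// Applies incoming facial-capture curves to the face mesh. Each curve
// name the app streams (e.g. "jawOpen", "eyeBlinkLeft") must match a
// morph target authored on the mesh, otherwise it has no effect.
void ApplyFacialCurves(USkeletalMeshComponent* FaceMesh,
                       const TMap<FName, float>& CurveValues)
{
    for (const TPair<FName, float>& Curve : CurveValues)
    {
        FaceMesh->SetMorphTarget(Curve.Key, Curve.Value);
    }
}
```

Since you mentioned a bone-based face rig, the usual equivalent there is mapping each incoming curve to a bone pose (for example with a curve-driven Pose Asset) instead of morph targets.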
You can try Faceware, or you can draw dots on your face, film footage from at least two sides, and animate bones to follow the dots in Blender.
Both solutions will require time and understanding, and are not something anyone will just be able to wake up tomorrow and start on.