Will MetaHuman Animator Support Webcam / Android?

Greetings!

I’m in pre-development right now and was going to stick to 5.1; however, 5.2 has MetaHuman Animator, and that is HUGE because it would mean I don’t have to use a third-party application like Faceware.

My question is:

Seeing as there is support for face-capture headsets, does that also mean there will be support for other visual capture devices, like HD webcams?

2 Likes

Bump.

It seems like the new MetaHuman Animator requires true depth data, which is possible with an iPhone and face-capture headsets.

I am sure someone will come up with a webcam/multicam solution to capture depth data.
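For anyone curious what that DIY route involves, here’s a rough sketch: OpenCV can already compute a disparity (depth-proportional) map from two ordinary webcams. This is a generic stereo example, not anything MetaHuman Animator can ingest, and the camera indices and matcher settings are placeholders:

```python
# Sketch: disparity (depth-proportional) map from two ordinary webcams with OpenCV.
# Assumes the cameras are already rectified/aligned; a real rig needs calibration first.
import cv2

cam_a = cv2.VideoCapture(0)  # camera indices are machine-specific placeholders
cam_b = cv2.VideoCapture(1)

# Semi-global block matching; numDisparities must be divisible by 16.
stereo = cv2.StereoSGBM_create(minDisparity=0, numDisparities=64, blockSize=7)

while True:
    ok_a, frame_a = cam_a.read()
    ok_b, frame_b = cam_b.read()
    if not (ok_a and ok_b):
        break
    gray_a = cv2.cvtColor(frame_a, cv2.COLOR_BGR2GRAY)
    gray_b = cv2.cvtColor(frame_b, cv2.COLOR_BGR2GRAY)
    disparity = stereo.compute(gray_a, gray_b)  # fixed-point result, scaled by 16
    cv2.imshow("disparity", cv2.convertScaleAbs(disparity, alpha=255.0 / (64 * 16)))
    if cv2.waitKey(1) == 27:  # Esc to quit
        break

cam_a.release()
cam_b.release()
cv2.destroyAllWindows()
```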

1 Like

Thank you for the insight.

I’m hoping the animator is a bit more flexible than you suggest. Epic seems to be (finally) going for a holistic approach to media development by consolidating Sketchfab, the Unreal Marketplace, ArtStation, and Quixel into Fab, so I’m hoping they’ll either integrate a webcam option directly OR release an in-house plugin.

Accessibility is their friend.

If there is anyone at Epic who sees this, I’d love your input.

1 Like

I tried using an application (MeFaMo) for this purpose, but the results were not satisfactory. Perhaps upgrading from my current 720p webcam to one with higher resolution could improve performance. While the application can drive MetaHuman characters directly, it still struggles to reproduce natural dialogue and accurate facial expressions. Nonetheless, I recommend checking it out and supporting the developer. However, based on my research, the most effective solution would be an iPhone or a dedicated, AI-backed face-capture device.

GitHub link: Release v0.1 - Initial Release · JimWest/MeFaMo · GitHub
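For context, MeFaMo builds on Google’s MediaPipe face tracking. A minimal sketch of that underlying webcam landmark capture looks roughly like this (none of MeFaMo’s Live Link plumbing, which is the hard part; the webcam index and landmark choice are just for illustration):

```python
# Sketch: webcam face-landmark capture with MediaPipe, the library MeFaMo builds on.
# This only extracts landmarks; mapping them to MetaHuman curves is the hard part.
import cv2
import mediapipe as mp

face_mesh = mp.solutions.face_mesh.FaceMesh(
    refine_landmarks=True,        # adds iris landmarks, useful for eye tracking
    min_detection_confidence=0.5,
    min_tracking_confidence=0.5,
)

cap = cv2.VideoCapture(0)  # webcam index is machine-specific
while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    results = face_mesh.process(cv2.cvtColor(frame, cv2.COLOR_BGR2RGB))
    if results.multi_face_landmarks:
        points = results.multi_face_landmarks[0].landmark
        # 478 normalised 3D points; index 1 sits near the nose tip.
        print(f"nose tip: x={points[1].x:.3f} y={points[1].y:.3f} z={points[1].z:.3f}")
    if cv2.waitKey(1) == 27:  # Esc to quit
        break

cap.release()
```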

1 Like

Greetings!

I use Faceware, which does give very good results. BUUUTTT I do have to pay for it.
I absolutely refuse to buy an iPhone simply for mocap when I already have a great Google phone.

1 Like

The MHA announcement states that any vertical stereo HMC could be used. Since these do not capture depth data directly, you could infer that basically any pair of vertically mounted cameras (including webcams) could potentially work. Stereo is required to generate a depth map, and as long as the footage is synched (either hardware- or software-based, for example synched in DaVinci Resolve), that should be all that’s required.
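On the software-sync point, one common trick is to estimate the offset between the two recordings by cross-correlating their audio tracks. A minimal sketch, assuming both cameras recorded audio and that cam_a.wav / cam_b.wav are hypothetical tracks extracted from the footage:

```python
# Sketch: estimate the sync offset between two camera recordings by
# cross-correlating their audio tracks (cam_a.wav / cam_b.wav are
# hypothetical tracks extracted from each camera's footage).
import numpy as np
from scipy.io import wavfile
from scipy.signal import correlate

rate_a, audio_a = wavfile.read("cam_a.wav")
rate_b, audio_b = wavfile.read("cam_b.wav")
assert rate_a == rate_b, "resample to a common rate first"

def to_mono(x):
    # Collapse to mono floats for correlation.
    x = x.astype(np.float64)
    return x.mean(axis=1) if x.ndim > 1 else x

a, b = to_mono(audio_a), to_mono(audio_b)

# The peak of the cross-correlation gives the lag (in samples) of b relative to a.
corr = correlate(a, b, mode="full")
lag = int(np.argmax(corr)) - (len(b) - 1)
print(f"offset: {lag} samples ({lag / rate_a * 1000:.1f} ms)")
```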

4 Likes

Please don’t play with my emotions. My heart can’t take it if this turns out not to be true!

Apologies for the delayed response. I haven’t been active on the forum for a while.

I agree with you; considering other options is essential. I’ve watched some YouTube videos showcasing the impressive results of Faceware, and it’s definitely worth a try.

Unfortunately, due to the exchange rate difference in my country, I currently don’t have the opportunity to try any systems or products. As a result, I have to continue working on character animations that don’t involve dialogue for now, focusing on various movements from a distance.

I hope that one day I’ll have the chance to experience these technologies.

1 Like

What about support for other depth sensors, like the Kinect?

2 Likes

It sucks that your access to certain technologies is limited based on location.

Again, here’s hoping that they have “in-house” solutions!

OR

That we’re lucky enough to be able to use any video capture device.

I’ve reached out to the team several times, on various platforms, and still no response.

Yes, it definitely sucks. I’m unable to work with Nanite because my GPU is an older AMD model that doesn’t support SM6 (Shader Model 6). Ray tracing remains a distant dream for me. As a workaround, I create all levels, animations, and other elements in Unlit mode, dividing the scene into levels and layers. It becomes a guessing game of whether it will work or not. And the funny part is that rendering a one-minute cinematic takes around six hours.

2 Likes

Now that MetaHuman Animator is out: anybody seeing this, let me know your findings!

1 Like

Ummm, there are already a lot of cameras out there that capture true depth data, like the IR/webcam ones for the MS Phone 8 (yes, all the way back then), the Kinect 2, and numerous ones from Logitech etc. for Android and Windows.

The question is whether these guys have locked themselves into a proprietary Apple format.

2 Likes

Well, that’s what I’m wondering. Still no official answer.

But the issue is that (unless I’m misinformed) there are supported stereo HMCs that do not capture such depth data directly, which would mean it isn’t a hard requirement.

I suspect they WANT us to use the iPhone, and that it’s totally possible not to (by using a Pixel phone or an HD webcam), but they will not openly endorse this for… reasons.

“Use as intended. Don’t get cute.”

Again, I’ll wait for someone in the forums to attempt this.

Time for another bump. Someone tell me there is a way to get decent face motion capture into Unreal Engine without needing an iPhone; I can’t find anything that works with a DSLR camera, webcam, or Android. The iPhone has a total monopoly on this feature, and it absolutely sucks. Has anyone got face motion capture working without an iPhone?

1 Like

Well, you certainly answered my question, in part!

1 Like

I’ve been looking. I managed to get okay body mocap using the free version of Rokoko and my webcam, but an easy face mocap solution still evades me, and I refuse to buy an Apple product.

Here’s what my research has found, but I haven’t had time to test these. I need something really simple, but I might have to pick the best of them and try it soon if no one out there comes up with an answer. I honestly cannot believe this is monopolised by Apple and no one has solved it…

MeFaMo - has been mentioned above.
FreeMoCap - not sure it integrates with UE.
Rokoko - yes, if you buy the equipment, though body tracking works with a webcam (I tested that); not face.
MoveAI - iPhone only.
MoCapForAll - I downloaded the free version on Steam but haven’t tried it. I think it needs calibration and multiple cameras or something.
ThreeDPoseTracker - this looks interesting, but it’s no longer supported and has only been used by the anime crowd, so I don’t expect it will have the nuance needed for realistic MetaHuman face movement.

Anything else is expensive. If anyone knows of free software that can achieve this without an iPhone, please point me at it.

1 Like

MetaHuman Animator currently supports two types of input: iPhone, where we use the depth information as captured by the TrueDepth camera, and stereo HMC, where we generate a calibration from calibration footage, which then allows us to determine depth information from stereo performance footage. In both cases depth is being utilised, so single-camera systems are not supported at the moment.

Stereo HMCs are dual-camera systems, so you could build a stereo setup from any two cameras, calibrate it, and input the result into MetaHuman Animator as HMC footage. Just keep in mind that professional HMCs are specifically designed for this purpose (e.g. synchronised cameras), so the quality of your results will vary a lot depending on your setup.

Depth sensors (e.g. Kinect) have a variety of different formats and ways of obtaining calibration information. Unfortunately, this makes it tricky to offer a good way of supporting all of the available devices right now, but we are continuing to look at what devices we might be able to support. We’ve yet to see an Android phone with a depth sensor suitable for facial capture, hence us currently targeting iOS rather than Android.

7 Likes

FINALLY an OFFICIAL response!

Thank you, Colin, for the insight. Hopefully an in-house solution for non-Apple/HMC users will become available. If not, Faceware to the rescue.
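For anyone who wants to try the two-camera route Colin describes, here is a rough sketch of generic checkerboard stereo calibration with OpenCV. To be clear, this illustrates stereo calibration in general, not MetaHuman Animator’s actual calibration format or workflow; the board size and image paths are placeholders:

```python
# Sketch: generic checkerboard stereo calibration with OpenCV. NOT MetaHuman
# Animator's calibration format; board size and image paths are placeholders.
import glob

import cv2
import numpy as np

BOARD = (9, 6)  # inner-corner count of a hypothetical printed checkerboard

# 3D reference points of the board corners (z = 0 plane, unit squares).
objp = np.zeros((BOARD[0] * BOARD[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:BOARD[0], 0:BOARD[1]].T.reshape(-1, 2)

obj_pts, pts_a, pts_b = [], [], []
for path_a, path_b in zip(sorted(glob.glob("cam_a/*.png")),
                          sorted(glob.glob("cam_b/*.png"))):
    img_a = cv2.imread(path_a, cv2.IMREAD_GRAYSCALE)
    img_b = cv2.imread(path_b, cv2.IMREAD_GRAYSCALE)
    ok_a, corners_a = cv2.findChessboardCorners(img_a, BOARD)
    ok_b, corners_b = cv2.findChessboardCorners(img_b, BOARD)
    if ok_a and ok_b:  # keep only frames where both cameras see the board
        obj_pts.append(objp)
        pts_a.append(corners_a)
        pts_b.append(corners_b)

size = img_a.shape[::-1]  # (width, height)

# Solve each camera's intrinsics separately first...
_, K_a, D_a, _, _ = cv2.calibrateCamera(obj_pts, pts_a, size, None, None)
_, K_b, D_b, _, _ = cv2.calibrateCamera(obj_pts, pts_b, size, None, None)

# ...then hold them fixed and solve only for the rotation/translation between
# the cameras, which is what makes depth triangulation from the pair possible.
err, *_, R, T, E, F = cv2.stereoCalibrate(
    obj_pts, pts_a, pts_b, K_a, D_a, K_b, D_b, size,
    flags=cv2.CALIB_FIX_INTRINSIC)
print(f"RMS reprojection error: {err:.3f}")
```

As Colin notes, hardware-synchronised professional HMCs will beat a hand-rolled rig; with consumer cameras, the reprojection error and your sync method will dominate the quality of the result.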

2 Likes