MetaHuman Animator currently supports two types of input: iPhone, where we use the depth information captured by the TrueDepth camera, and stereo HMC, where we generate a calibration from calibration footage, which then allows us to recover depth information from the stereo performance footage. Both paths rely on depth, so single camera systems are not supported at the moment.
Stereo HMCs are dual camera systems, so you could build a stereo rig from any two cameras, calibrate it, and feed the result into MetaHuman Animator as HMC footage. Just keep in mind that professional HMCs are specifically designed for this purpose (e.g. synchronised cameras), so the quality of your results will vary a lot depending on your setup.
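If you want a feel for what the calibrate-then-derive-depth step involves, below is a minimal sketch of the general technique using OpenCV in Python. To be clear, this is not MetaHuman Animator's actual pipeline; the checkerboard size, file paths, frame count, and matcher settings are all placeholder assumptions. The flow itself (calibrate each camera, solve the stereo extrinsics, rectify, match along scanlines, reproject) is the standard approach for getting depth out of a calibrated stereo pair.

```python
# Minimal sketch of stereo calibration + depth recovery with OpenCV.
# This is NOT MetaHuman Animator's pipeline -- just the general technique.
# Checkerboard size, image paths, and matcher settings are placeholders.
import cv2
import numpy as np

PATTERN = (9, 6)    # inner corners of the calibration checkerboard
SQUARE_MM = 25.0    # physical size of one checkerboard square

# 3D corner positions in the board's own coordinate frame
obj_template = np.zeros((PATTERN[0] * PATTERN[1], 3), np.float32)
obj_template[:, :2] = np.mgrid[0:PATTERN[0], 0:PATTERN[1]].T.reshape(-1, 2) * SQUARE_MM

obj_pts, pts_l, pts_r = [], [], []
for i in range(20):  # pairs of synchronised calibration frames
    img_l = cv2.imread(f"calib/left_{i:02d}.png", cv2.IMREAD_GRAYSCALE)
    img_r = cv2.imread(f"calib/right_{i:02d}.png", cv2.IMREAD_GRAYSCALE)
    ok_l, c_l = cv2.findChessboardCorners(img_l, PATTERN)
    ok_r, c_r = cv2.findChessboardCorners(img_r, PATTERN)
    if ok_l and ok_r:
        obj_pts.append(obj_template)
        pts_l.append(c_l)
        pts_r.append(c_r)

size = img_l.shape[::-1]  # (width, height)
# Calibrate each camera, then solve the rotation/translation between them
_, K_l, d_l, _, _ = cv2.calibrateCamera(obj_pts, pts_l, size, None, None)
_, K_r, d_r, _, _ = cv2.calibrateCamera(obj_pts, pts_r, size, None, None)
_, K_l, d_l, K_r, d_r, R, T, _, _ = cv2.stereoCalibrate(
    obj_pts, pts_l, pts_r, K_l, d_l, K_r, d_r, size,
    flags=cv2.CALIB_FIX_INTRINSIC)

# Rectify so epipolar lines are horizontal, then match along scanlines
R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(K_l, d_l, K_r, d_r, size, R, T)
map_lx, map_ly = cv2.initUndistortRectifyMap(K_l, d_l, R1, P1, size, cv2.CV_32FC1)
map_rx, map_ry = cv2.initUndistortRectifyMap(K_r, d_r, R2, P2, size, cv2.CV_32FC1)

frame_l = cv2.remap(cv2.imread("perf/left.png", 0), map_lx, map_ly, cv2.INTER_LINEAR)
frame_r = cv2.remap(cv2.imread("perf/right.png", 0), map_rx, map_ry, cv2.INTER_LINEAR)

matcher = cv2.StereoSGBM_create(minDisparity=0, numDisparities=128, blockSize=5)
disparity = matcher.compute(frame_l, frame_r).astype(np.float32) / 16.0

# Reproject to 3D: depth falls out of Z = f * baseline / disparity
points_3d = cv2.reprojectImageTo3D(disparity, Q)
depth_mm = points_3d[:, :, 2]
```

This sketch is also why synchronised cameras matter so much: the disparity matching assumes the left and right frames show the same instant, so any timing offset between unsynchronised cameras turns facial motion into depth error.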
Depth sensors (e.g. Kinect) come in a variety of formats and expose calibration information in different ways. Unfortunately this makes it tricky to support all of the available devices well right now, but we are continuing to look at which devices we might be able to support. We have yet to see an Android phone with a depth sensor suitable for facial capture, which is why we currently target iOS only rather than Android.