Is it possible to use facial mocap data with the MetaHuman face rig?

The live sync via the iOS app (and the Android app developed by a user) is really amazing.
However, I am wondering whether 'recorded' facial mocap data can be used with a MetaHuman.
If it is possible, which formats or types of data can be used?