We are using a 3D head/face scanner that can capture about 10 meshes per second, so we can record facial motion as a sequence of 3D scans (neutral + various expressions over time).
Our goal is to take a MetaHuman generated from the person's neutral scan in Unreal Engine and, by changing only the facial control rig (blendshape-like parameters), match the MetaHuman's expression to each scan mesh of the same person.
Concretely:
- We have multiple high-quality scans of the same face with different expressions.
- We would like to solve for the MetaHuman facial controls so that, for a given scan mesh, we get a set of control-rig parameters (or blendshape weights) that matches the MetaHuman mesh to that scan as closely as possible (a rough sketch of the kind of solve we mean follows this list).
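To make the intent concrete, here is a minimal sketch of the kind of fit we have in mind, written in Python with NumPy/SciPy. It assumes the scan has already been wrapped to MetaHuman topology so vertices correspond one-to-one, and that we can export the neutral mesh and per-shape vertex deltas; all names here are hypothetical, and we know the real MetaHuman rig has correctives and nonlinear behavior that a plain linear solve like this ignores.

```python
import numpy as np
from scipy.optimize import lsq_linear

def fit_blendshape_weights(neutral, deltas, target):
    """Fit per-shape weights w in [0, 1] so that
    neutral + deltas @ w approximates the target scan.

    neutral: (V, 3) neutral-pose vertices (MetaHuman topology)
    deltas:  (V, 3, S) per-shape vertex offsets for S blendshapes
    target:  (V, 3) scan vertices, wrapped to the same topology
    """
    V, S = neutral.shape[0], deltas.shape[2]
    A = deltas.reshape(3 * V, S)            # flatten to a (3V, S) linear system
    b = (target - neutral).reshape(3 * V)   # residual the shapes must explain
    result = lsq_linear(A, b, bounds=(0.0, 1.0))  # bounded least squares
    return result.x                          # one weight per blendshape

if __name__ == "__main__":
    # Tiny synthetic check: two shapes, recover known weights.
    rng = np.random.default_rng(0)
    neutral = rng.normal(size=(100, 3))
    deltas = rng.normal(size=(100, 3, 2))
    target = neutral + deltas @ np.array([0.3, 0.7])
    print(fit_blendshape_weights(neutral, deltas, target))  # ~[0.3, 0.7]
```

This is just to clarify the problem statement; we suspect the control rig is not a plain linear blendshape stack, which is exactly why we are asking about established tooling rather than rolling our own solver.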
Is there an established workflow or tool that you would recommend for this use case?
Are there any example projects, scripts, or tutorials that specifically show how to fit MetaHuman facial controls to external meshes?
Any production-proven tips (including pitfalls around DNA files or topology requirements) would be very helpful.
Thank you!