Hi
As you know, a MetaHuman is made up of (1) a mesh and (2) the textures covering that mesh, and to reproduce a person's face in 3D, the textures are as important as the mesh.
The MetaHuman's facial mesh can be made somewhat to almost identical to the original face object (a scanned object, a DAZ 3D object, etc.) through the "Mesh to MetaHuman" process.
For the textures, however, there is no option but to pick from the presets in MetaHuman Creator, which imposes considerable limitations on reproducing the original face.
Of course, if you use ZBrush with ZWrap, Houdini, or Blender, you can convert the original scanned texture into a MetaHuman texture, but the task is far from simple: you have to overlap the MetaHuman mesh with the original mesh and meticulously designate quite a few feature points (eyes, nose, mouth, etc.) one by one.
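To make the "far from simple" part concrete, here is a toy sketch (not MetaHuman or ZWrap code, and all names are illustrative) of the core idea those tools automate: once the two meshes are overlapped, each vertex on the target mesh can sample the color of its nearest neighbor on the scan. Real tools do this in UV/texture space with proper projection and the feature points mentioned above; this is only the underlying concept.

```python
# Toy sketch of nearest-neighbor color transfer between two aligned meshes.
# Assumes the meshes have already been overlapped in the same coordinate space.

def transfer_vertex_colors(scan_verts, scan_colors, target_verts):
    """Brute-force nearest-neighbor color transfer.

    scan_verts   -- list of (x, y, z) positions on the scanned mesh
    scan_colors  -- list of (r, g, b) colors, one per scan vertex
    target_verts -- list of (x, y, z) positions on the target mesh
    Returns one (r, g, b) color per target vertex.
    """
    result = []
    for v in target_verts:
        # Squared Euclidean distance from this target vertex to every scan vertex.
        dists = [sum((a - b) ** 2 for a, b in zip(v, s)) for s in scan_verts]
        # Copy the color of the closest scan vertex.
        result.append(scan_colors[dists.index(min(dists))])
    return result

# Toy data: a red vertex at the origin and a blue vertex at x = 1.
scan_verts = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
scan_colors = [(255, 0, 0), (0, 0, 255)]
target_verts = [(0.1, 0.0, 0.0), (0.9, 0.0, 0.0)]
print(transfer_vertex_colors(scan_verts, scan_colors, target_verts))
# → [(255, 0, 0), (0, 0, 255)]
```

Even this trivial version only works because the toy meshes are already perfectly aligned, which is exactly the laborious part of the manual workflow.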
Meanwhile, the "Mesh to MetaHuman" process itself seems like the natural place for this conversion: at the "MetaHuman Identity Solve" step, for example, the original texture information should presumably be easy to carry over.
Yet I cannot see why users are asked to choose from preset textures that do not come close to the original, after the original texture information, which is crucial for reproducing the character, has been discarded.
Currently, only the mesh is somewhat similar, so the character reproduction remains imperfect and half-baked; if the original texture could be applied without extra steps, the potential of MetaHuman would be much greater.
Of course, truly polished work will still be done in separate applications (Maya, ZBrush with ZWrap, Substance Painter, etc.), but if the original texture were reflected in the "Mesh to MetaHuman" process, it would open a new era of usability, especially for amateurs, indie game developers, and film producers.
Please let me know if there are any facts I am unaware of, or if there is a better way to do this.
Thank you in advance.