Hi, originally I was working in Omniverse with their provided human models, using their Audio2Face capability to animate the models. Then we wanted to see if we could create a model of me, and it was suggested that we use MetaHuman Creator with images of myself to create a MetaHuman. Normally, we would have exported that over to Omniverse, built our text-to-speech Omniverse application there using the rigged 3D model, and off we'd go.
However, after reading the license for MetaHuman Creator, it seems like the language in there does not allow us to "render" the face anywhere other than in Unreal Engine. Am I mistaken about the definition of "render" being used here?
We also looked at creating an Unreal Engine application instead and doing the lip syncing to voice audio there, but were surprised to find that the plug-ins weren't as mature as Audio2Face. :-/
We also know that we can link Omniverse (and so Audio2Face in Omniverse) to Unreal Engine. But then we don't know how you would package the Unreal Engine application without also having to bundle Omniverse with it.
We're open to suggestions on how we should proceed while still staying within the EULA for MetaHuman Creator.