I am leading a machinima project in UE (started on 4.26 and currently exploring a possible migration to 5.4).
We have a fairly sizable roster of character models. Some were ported from the Source engine and optimised for UE (re-rigged to fit), and others were developed in Blender. By all accounts, all of our models should function the same way. My apologies for the lack of developer detail; I was not the one behind all the hard work.
What I am trying to research is how to bring these characters to life for the films we want to make in UE, specifically lip syncing and body animation. For now I want to focus on lip syncing/facial tracking. Despite exhaustive efforts to find a tutorial that covers a situation like ours, MetaHumans seem to be the only answer anyone wants to give us.
What frustrates me is that I don’t know what to make of that answer, and I now worry we’ve been backed into a corner: all these character models have gone through Blender, only for us to find we can’t utilise them properly.
So I ask anyone with the knowledge to recommend a sensible direction: what options are there? I hope there is at least one.