We’re looking to automate a pipeline for generating lip-synced talking head videos using MetaHuman and Audio Driven Animation in Unreal Engine 5.5. We have the MetaHumans set up, and we need to:
- Start Unreal Engine via command line
- Pass an audio file
- Randomly select from 5 moving head animations (so they don’t all look the same)
- Render the output at 256x256 resolution, fully unattended (no human involvement)
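In case it helps frame the question, here's a rough sketch of the wrapper we're imagining: a plain Python script outside Unreal that picks one of the five head animations at random and launches the editor headlessly with `-ExecutePythonScript` (which forwards extra arguments to an in-editor script as `sys.argv`). The asset paths, project path, and the `render_talking_head.py` script name are placeholders, not working values — the actual render (Movie Render Queue, 256x256 output, the Audio Driven Animation step) would happen inside that in-editor script.

```python
import random
import subprocess
import sys

# Hypothetical install/project paths -- adjust to your machine.
UE_CMD = r"C:\Program Files\Epic Games\UE_5.5\Engine\Binaries\Win64\UnrealEditor-Cmd.exe"
PROJECT = r"C:\Projects\TalkingHeads\TalkingHeads.uproject"

# Hypothetical asset paths for the five head-motion animations.
HEAD_ANIMS = [f"/Game/Animations/HeadMotion_{i:02d}" for i in range(1, 6)]

def build_render_command(audio_file: str) -> list[str]:
    """Pick one of the five head animations at random and build the
    headless-editor command line that runs an in-editor render script."""
    anim = random.choice(HEAD_ANIMS)
    # -ExecutePythonScript runs a Python script inside the editor; anything
    # after the script path is forwarded to it as sys.argv. The in-editor
    # script would set up the Movie Render Queue job (256x256 output) there.
    return [
        UE_CMD,
        PROJECT,
        f"-ExecutePythonScript=render_talking_head.py {audio_file} {anim}",
    ]

if __name__ == "__main__" and len(sys.argv) > 1:
    # e.g. python launch_render.py speech.wav
    subprocess.run(build_render_command(sys.argv[1]), check=True)
```

We went with an external launcher (rather than doing the random pick inside Unreal) so each editor run is a clean, single-purpose process that can be batched per audio file — but we're open to better patterns.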
Can anyone advise on the best approach to automate this? We found these resources, but need further guidance:
Thanks for any help!