Hello,
I’m starting a project that uses Unreal Engine’s MetaHuman technology to create characters capable of real-time speech, with emotion-driven facial animation generated from audio files via UE 5.5’s audio-to-animation tools.
Project Flow:
Step 1: Capture a voice file in real time.
Step 2: After the voice file is recorded, the Unreal Engine lip sync module should process it. Currently, the module fails to activate as expected; resolving this issue is part of the project.
What I’m Looking For: I need a seasoned expert in both MetaHuman and Unreal Engine who can:
- Develop the Unreal Engine project from the ground up.
- Create the necessary assets.
- Implement lip sync and facial animation driven by audio input and emotional cues.
Contact: If you’re interested in this project, please reach out at: kyungchan.jun@gmail.com