So I am having audio sync issues with MetaHuman characters. When recording with Live Link and reviewing the performance in Sequencer, the audio and lips seem in sync. I have checked that the project setting is locked to a forced 24 FPS, the Movie Render output is done at 24 FPS, and the EXR sequence is re-encoded for Premiere at 24 FPS.
In terms of audio, I have the audio from the Live Link recording itself, and I also simultaneously captured it with a mic in Adobe Audition.
When I attempt to marry them back up in Premiere, the audio drifts out of sync with the character. Even if the first few phrases are lined up, it drifts away from the character's lip movements over time, and that happens whether I bring in the Audition recording OR the .WAV file rendered from UE5's Movie Render tool (which I personally think is a horrible tool, but that's a separate topic).
Anyone else having this issue?
So the drift is continuous in one direction? I.e., getting progressively further ahead of/behind the facial animations?
Not sure exactly where the mismatch is, whether it's hardware or software. Am I safe in assuming you're running an external audio interface and some type of XLR microphone?
Worst case, there should be some function to stretch/warp the overall timing of the audio track in Premiere. If it's a recording speed/encoding mismatch, the drift should be consistent, and it should be fairly simple to fix that way.
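To put rough numbers on it, here's a back-of-the-envelope sketch of how much drift a constant rate mismatch produces (the rates below are common example values, not your confirmed settings):

```python
# Back-of-the-envelope: seconds of drift accumulated per minute when media
# recorded at actual_rate is played back as if it were nominal_rate.
def drift_per_minute(nominal_rate: float, actual_rate: float) -> float:
    return 60.0 * (actual_rate - nominal_rate) / nominal_rate

# 23.976 fps footage interpreted as 24 fps: ~0.06 s of drift per minute,
# i.e. about 1.5 frames per minute -- clearly visible after a few minutes.
print(drift_per_minute(24.0, 23.976))

# The constant stretch factor that would correct it in Premiere:
print(24.0 / 23.976)  # ~1.001, i.e. stretch the fast track by ~0.1%
```

If your drift is roughly linear like that, a single stretch value should fix the whole track.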
So yes, from what I can see the drift gets worse as the recording progresses. The left side of the Gesture screenshot shows the hand gesture where the phrase should be occurring.
The right side of the screenshot shows the playhead where the spoken words that should accompany that gesture actually fall (circled in red).
And for reference, the top audio track is the one output directly from Unreal via Live Link, and the bottom audio track is the version that was simultaneously recorded with a wired mic in Adobe Audition.
I am not a huge audio engineer/person… so I am sure there is just something I am missing, a setting or something.
So you're recording audio in Audition, and then importing that recording into Premiere? Is there some sort of global audio clock setting/BPM in either program?
The only comparison I can draw is taking the vocals from a popular song and trying to layer them over a different song playing at a different speed; you have to manually warp at least one of the audio tracks to get them to sync. Usually it involves a rough stretch to get one of the tracks generally aligned, and then slicing and stretching individual clips to make them fit perfectly. There may be some wonky stretching occurring either during the export or the import.
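For the rough global stretch part, the underlying operation is just resampling by a constant ratio. A minimal illustration in Python with numpy (naive linear interpolation; a DAW's warp algorithms are far more sophisticated than this):

```python
import numpy as np

def stretch_audio(samples: np.ndarray, ratio: float) -> np.ndarray:
    """Time-stretch a mono signal by a constant ratio using naive linear
    interpolation (ratio > 1 lengthens, ratio < 1 shortens). This also
    shifts pitch, which is negligible for sub-1% drift corrections."""
    old_idx = np.arange(len(samples))
    new_idx = np.linspace(0, len(samples) - 1, int(len(samples) * ratio))
    return np.interp(new_idx, old_idx, samples)

# Example: correct one second of 48 kHz audio that runs ~0.1% fast
corrected = stretch_audio(np.random.randn(48000), 24.0 / 23.976)
print(len(corrected))  # ~48048 samples
```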
If you're plugging your microphone directly into a jack or USB port on your computer, that might play into it. There can be issues when recording through an on-board audio interface, which can be compounded by running CPU-intensive programs at the same time.
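One quick sanity check you could run: compare the lengths the two captures of the same take ended up with. A minimal sketch using Python's standard wave module (the filenames are placeholders for your actual files):

```python
import wave

def wav_duration(path: str) -> float:
    """Duration in seconds from the WAV header's frame count and rate."""
    with wave.open(path, "rb") as w:
        return w.getnframes() / w.getframerate()

# Placeholder filenames -- substitute your actual exports.
ue_len = wav_duration("unreal_livelink_export.wav")
audition_len = wav_duration("audition_capture.wav")

print(f"Unreal export:    {ue_len:.3f} s")
print(f"Audition capture: {audition_len:.3f} s")
# The two captures won't start at the exact same instant, but if the same
# performance yields clearly different lengths, this ratio approximates
# the constant stretch factor needed to align them:
print(f"Ratio: {ue_len / audition_len:.6f}")
```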
Well, I was doing both as a backup, since the audio seems to come from the iPhone Live Link recording and I wasn't sure about the volume and quality, whereas the mic recording in Audition could be cleaned up more, quality-wise…
If you're simply running Audition in the background as a second capture source, try using only one audio capture source, or try different capture software such as Audacity/Ableton/etc. Not sure how Audition works, but Ableton/Cubase/Bitwig etc. all have excellent warp/stretch algorithms. You can then apply EQ, reverb, compression, and limiting to the audio, as well as general mixing and fixing.
Using a good microphone will definitely yield better results than iPhone audio capture.
As an update, I ended up enabling the Apple ProRes plugin in the UE5 project and then exported an actual .MOV file from Unreal. When I marry that up to the audio in post, I am not having the drifting issue; it lines up right.
So… that leads me to believe the issue is somewhere in the EXR export I was previously doing, though I am not sure where/what it is… But the ProRes route has provided a workable solution in the interim.
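If anyone wants to dig into where the EXR path goes wrong, one thing I'd check is the frame rate the re-encoded file actually reports. A quick sketch calling ffprobe from Python (assumes ffmpeg/ffprobe is installed; the filename is a placeholder):

```python
import subprocess

def reported_frame_rates(path: str) -> str:
    """Ask ffprobe for the video stream's real and average frame rates."""
    result = subprocess.run(
        [
            "ffprobe", "-v", "error",
            "-select_streams", "v:0",
            "-show_entries", "stream=r_frame_rate,avg_frame_rate",
            "-of", "default=noprint_wrappers=1",
            path,
        ],
        capture_output=True, text=True, check=True,
    )
    return result.stdout

# Placeholder filename -- point this at the Premiere re-encode and at the
# ProRes MOV and compare what each one reports.
print(reported_frame_rates("reencoded_from_exr.mov"))
# Anything other than 24/1 (e.g. 24000/1001 = 23.976) would explain drift.
```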