Hello everyone. I'm having problems with the sync of audio and facial animation in Sequencer during gameplay.
-
When I stream A2F into Unreal 5.3, everything is fine and in sync. But as soon as I export the animation from A2F, import it into Unreal as .usd, and play it in the Sequencer together with the audio track, the animation and audio diverge more and more over time. The frame rates of the Sequencer and the exported animation match.
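(For context on why I suspected frame rates first: even a tiny mismatch between the rate an animation was authored at and the rate it's played back at accumulates into visible drift. The numbers below are illustrative assumptions, not my actual project settings; this is just the sanity check I ran.)

```python
# Quick sanity check: how much drift a small frame-rate mismatch produces.
# Assumes the audio stays on the wall clock while the baked animation is
# stepped frame-by-frame at the Sequencer's rate.

def drift_seconds(clip_seconds: float, seq_fps: float, anim_fps: float) -> float:
    """Offset after clip_seconds if the animation was authored at anim_fps
    but is played back at seq_fps."""
    frames = clip_seconds * anim_fps          # frames in the exported animation
    return clip_seconds - frames / seq_fps    # wall-clock time minus playback time

# Hypothetical example: a 60 s clip, NTSC-style 29.97 fps export vs an even 30 fps timeline.
print(round(drift_seconds(60.0, 30.0, 29.97), 3))  # -> 0.06 (lips ~2 frames early after a minute)
```

In my case both rates report the same value, which is why I'm stumped.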
-
I get the same problem with animation generated with MetaHuman Animator.
In my use case this is not for a render pipeline but for animated cutscenes in a game.
Is anyone else seeing similar problems? How can I solve this?