It’s been a few months since I tried this, so my recollection may be spotty. I imported the .mov file into Sequencer as you have, but wasn’t making use of the sound. If you need it, there’s a “Media Sound” component referenced in the docs (https://docs.unrealengine.com/en-US/WorkingWithMedia/MediaFramework/HowTo/FileMediaSource/index.html) that looks to be the audio analogue of the video texture.

If I remember correctly, Unreal doesn’t make use of the timecode embedded in the .mov file, for whatever reason, so I found the start timecode in the .mov metadata and entered it as the ‘Section Range Start’ and ‘Timecode’ on the Live Link Sequencer track (right-click the track > ‘Edit Section’, if you haven’t found these yet). There’s a sketch below of one way to read that timecode out of the file.

From there you have the problem that there’s a network delay between the phone you’re getting the face-tracking data from and the Unreal machine you’re recording it on, so you still have to nudge the video into line manually to counter that delay. Even though the two devices use the same timecode source (NTP), they timestamp the same ‘real-world frame’ at two different points in time, hence two different timestamps for the same ‘frame’. I hope that makes sense.
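For the metadata step, here’s a minimal sketch of how you might read the start timecode with ffprobe (this assumes ffmpeg/ffprobe is installed and that the recording actually carries a timecode tag; the file name is just a placeholder, not something from my project):

```python
import json
import subprocess

def mov_start_timecode(path: str) -> str | None:
    """Return the embedded start timecode (e.g. '14:25:36:12') of a .mov, or None."""
    result = subprocess.run(
        [
            "ffprobe", "-v", "error",
            # The timecode tag can sit on the container or on a stream,
            # depending on how the file was recorded, so ask for both.
            "-show_entries", "format_tags=timecode:stream_tags=timecode",
            "-of", "json",
            path,
        ],
        capture_output=True, text=True, check=True,
    )
    data = json.loads(result.stdout)
    # Prefer the container-level tag, fall back to the first stream tag found.
    tc = data.get("format", {}).get("tags", {}).get("timecode")
    if tc is None:
        for stream in data.get("streams", []):
            tc = stream.get("tags", {}).get("timecode")
            if tc:
                break
    return tc

if __name__ == "__main__":
    # Hypothetical file name; substitute your own recording.
    print(mov_start_timecode("face_capture.mov"))
```

Whatever that prints is the value that goes into ‘Section Range Start’ and ‘Timecode’ on the Live Link section, with the manual nudge for network delay applied on top.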