Starting recording on Live Link Face app doesn't start Take Recorder recording

My coworkers and I are trying to set up a character utilising the Live Link Face app on iPhone X for facial animation. We’re currently setting up an animation blueprint to match up the actor and character movements. This documentation page says that:

If you are connected through Live Link to any instances of Unreal Engine at the time you start recording from the Live Link Face app or through the OSC interface, you will also launch the Take Recorder on all connected instances of Unreal Engine. Your animation performances will be recorded both on your iPhone and in the Take Recorder on your computer.

This suggests that when we hit record on the app, the Take Recorder should open and start recording at the same time; however, this doesn’t happen. I’m working on the animation blueprint remotely, so I guess I’d need the live link data in the Take Recorder slate and the video from the app to match up nice and tight for this to work. We’ve Googled and watched various videos about setting up the app etc, but so far nothing has done the trick. It also doesn’t seem to work in the Face AR Demo that Epic provides, so I’m betting we didn’t just misconfigure it. Are we missing something? Thanks.

I have the same problem here: Take Recorder records nothing when I finish recording. Has anyone got any tips? Or is it a bug or something?

I think I finally found out what’s going on: I changed the timecode source from System to NTP in the app, and Take Recorder finally records something. Hope this helps!!

Nice. Our original problem was actually about lining up multiple sources (actor video and Live Link Face data), but after rigging the character to use ARKit blend shapes the actor video wasn’t necessary any more, so we no longer needed a workaround.

We did however recently have problems with Live Link tracks recording with no data points. This time it was from a Rokoko suit, which doesn’t appear to have a timecode setting in the Studio software (that I could find); however, unticking ‘Use Source Timecode’ in the Take Recorder settings did the trick. I guess if your source doesn’t output a compatible timecode, Take Recorder will just try to stuff the incoming data into 0:00 or drop it altogether. Still, failing silently isn’t good UX, so I’m going to look into getting a bug filed for this. Thanks for the heads up!

This does raise further questions about what to do about synchronization with any data that doesn’t go through Rokoko studio, but I guess we’ll have to ask them.

Have you ever found a solution for this?

Unfortunately, no. The best I could do was record the Live Link Face data, then add the actor video as a media source in the same sequence, and try to align them by eye. If you can think of something that would act as a visual ‘clapperboard’, it might help your chances, as I found this pretty tricky to line up based on gestures alone.

I saw a suggestion in another thread a while back that the Unreal team were working on a way to import the (timecoded at source) CSV, which I think would eliminate this problem, but I don’t know if any progress has been made. I did succeed in getting the CSV in as a data table asset (albeit without an appropriate field type for the timecode), but I couldn’t see any way of putting together the appropriate sequencer track from this without delving into some C++, which I wasn’t prepared to do for this. I’m a Clojure guy, so I die a little inside every time I can’t just pipe something through a few data transforms and have it just work. Sigh. Maybe you’re more C++-savvy.
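For what it’s worth, getting the CSV into plain data outside the engine is the easy part. Here’s a rough, untested sketch in Python, assuming (check your own export’s header) that the first column is a timecode like `HH:MM:SS:FF.sss` and the remaining columns are blendshape curves:

```python
import csv
import io

def parse_face_csv(text, fps=60):
    """Parse a Live Link Face-style CSV into (frame_number, {curve: value}) rows.

    Assumes: column 0 is a 'HH:MM:SS:FF.sss' timecode, remaining columns are
    float blendshape values, non-drop-frame timecode at the given fps.
    """
    reader = csv.reader(io.StringIO(text))
    header = next(reader)
    curve_names = header[1:]
    rows = []
    for row in reader:
        hh, mm, ss, ff = row[0].split(":")
        # Absolute frame number since 00:00:00:00 at this frame rate.
        frame = ((int(hh) * 60 + int(mm)) * 60 + int(ss)) * fps + int(float(ff))
        curves = dict(zip(curve_names, (float(v) for v in row[1:])))
        rows.append((frame, curves))
    return rows
```

From there it’s just data transforms; the hard part remains turning that into a sequencer track inside UE.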

Sad that it still doesn’t work. Let’s upvote the issue here: Unreal Engine Issues and Bug Tracker (UE-96206). By the way, does anyone know what the ‘Backlogged’ issue status means?

Nox, do you import the mov file generated by the app? I can’t get the audio to play when I drop it as a media track in sequencer. In the media player it has sound when I play it, but not in sequencer. Also, the mov file comes with a timecode (synced by NTP); I checked in editing software. Is there a way to get this info into sequencer to help align the track?

Couldn’t agree more. This would also help make the live link facial mocap workflow non-destructive, since you could calibrate/clean up mocap data while still preserving the raw recording data.

It’s been a few months since I was trying this, so my recollection may be spotty. I imported the mov file into sequencer as you have, but wasn’t making use of the sound. If you need it, there’s a “Media Sound” component referenced in the docs (https://docs.unrealengine.com/en-US/WorkingWithMedia/MediaFramework/HowTo/FileMediaSource/index.html) that looks to be analogous to the video texture, but for audio. If I remember correctly, Unreal doesn’t seem to make use of the timecode from the mov file for whatever reason, so I think I just found the start timecode from the mov metadata and entered that as the ‘section range start’ and ‘timecode’ on the live link sequencer track (right click track > ‘edit section’ if you’ve not found these yet). From there you have the problem that there’s a network delay between the phone you’re getting the face tracking data from and the Unreal machine you’re recording it on, so you still have to nudge the video in line manually to counter that delay. Even though the two are using the same timecode source (NTP), they’re timestamping the same ‘real world frame’ at two different points in time, hence two different timestamps for the same ‘frame’. I hope that makes sense.
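If it helps, the arithmetic for turning that mov start timecode into the frame number you type into ‘section range start’ is straightforward. A hypothetical helper (my own naming, assuming non-drop-frame timecode):

```python
def timecode_to_frame(tc: str, fps: int) -> int:
    """Convert a non-drop-frame 'HH:MM:SS:FF' timecode to an absolute
    frame number at the given frame rate."""
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def offset_frames(video_tc: str, live_link_tc: str, fps: int) -> int:
    """Signed frame offset between the two tracks; a positive result means
    the video section starts later than the Live Link data."""
    return timecode_to_frame(video_tc, fps) - timecode_to_frame(live_link_tc, fps)
```

The residual network delay on top of that still has to be nudged out by eye, as above.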

Upvoted. There’s a developer explaining what ‘backlogged’ means here: Bugs: Back Logged vs Fixing - #6 by NickDarnell

The gist seems to be that it’s not a priority, and indeed I don’t think this fix would solve the issue caused by network delay, so what we could really do with is a way to import that csv file that comes with the mov video as a live link track in sequencer. That way, the timecodes definitely match and no manual track aligning is required. Bonus points for an option to respect the mov file timecode on import.

Good thinking. That aspect hadn’t occurred to me. I’d simply been keeping a copy of the original sequencer track uasset, but I’d rather have it as plain old (portable/generic) data than tied into UE’s asset management/reference system, which I don’t think I’ve quite grokked yet.