Open-sourced it all and added a trainer.
You can take your iPhone data, extract it, and put the folder in dataset/data; it will automatically strip the audio and create a dataset.
Just ensure you calibrate your face in the iPhone LiveLink app and use the ARKit shapes (NOT Metahuman Animator).
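For a rough idea of what that dataset-prep step looks like, here's a minimal sketch: walk the takes you dropped into dataset/data, strip the audio track from each recording, and pair it with its ARKit blendshape CSV. This is illustrative only, not the actual trainer code; the use of ffmpeg, the take folder layout, and the output paths/names are assumptions.

```python
# Illustrative sketch of the dataset-prep step described above (not the real trainer code).
# Assumes each take is a folder under dataset/data containing a .mov recording and an
# ARKit blendshape .csv; the actual layout and tooling may differ.
import subprocess
from pathlib import Path

DATA_DIR = Path("dataset/data")      # where you drop the extracted iPhone takes
OUT_DIR = Path("dataset/processed")  # hypothetical output location

def extract_audio(mov: Path, wav: Path) -> None:
    """Strip the audio track out of the take's video (assumes ffmpeg is installed)."""
    subprocess.run(
        ["ffmpeg", "-y", "-i", str(mov), "-vn", "-ac", "1", "-ar", "16000", str(wav)],
        check=True,
    )

def build_dataset() -> None:
    OUT_DIR.mkdir(parents=True, exist_ok=True)
    for take in sorted(p for p in DATA_DIR.iterdir() if p.is_dir()):
        movs = list(take.glob("*.mov"))
        csvs = list(take.glob("*.csv"))  # ARKit blendshape curves
        if not movs or not csvs:
            print(f"skipping {take.name}: missing video or blendshape CSV")
            continue
        # Strip the audio and pair it with its blendshape CSV for training.
        extract_audio(movs[0], OUT_DIR / f"{take.name}.wav")
        (OUT_DIR / f"{take.name}.csv").write_bytes(csvs[0].read_bytes())

if __name__ == "__main__":
    build_dataset()
```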
A new, updated architecture is coming at the end of the month, with software and plugins in the works after that. It took a while, but we finally have a model that can be trained on less than 30 minutes of data in a few hours on a 4090, so it's usable in many situations.