Preface: I am an artist transitioning to a technical artist, so I don't know exactly what goes into implementing something like this.
Preface pt 2: Tim liked my Tweet about this:
The idea is to build into the engine the ability to create dynamic facial animation from audio and training data. The white paper describing this is two years old now, and its output is much better than what I have seen coming out of tools like iClone and Faceware. I feel like with a toolset like this we would start to see much better performance sequences, and it would push cinematics even further, from small teams up to AAA.
I would imagine this as a two-part plugin: the in-engine runtime plugin, and a data-set training tool that would ship in Engine Extras.
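To make the two-part idea concrete, here is a heavily simplified, hypothetical sketch of the split: a "training tool" that learns a mapping from per-frame audio features to facial blendshape weights, and a "runtime" step that applies that mapping to new audio. All names, dimensions, and the linear model are illustrative assumptions on my part, not the actual method from the paper (which uses a deep network) or any engine API.

```python
import numpy as np

# Illustrative dimensions (assumptions, not from the paper):
# per-frame audio features (e.g. spectral coefficients) -> blendshape weights.
N_FRAMES, N_AUDIO_FEATS, N_BLENDSHAPES = 500, 26, 51

rng = np.random.default_rng(0)

# Stand-in training data: audio features X paired with
# artist-authored blendshape curves Y for the same frames.
X = rng.normal(size=(N_FRAMES, N_AUDIO_FEATS))
true_W = rng.normal(size=(N_AUDIO_FEATS, N_BLENDSHAPES))
Y = X @ true_W  # synthetic "ground truth" animation curves

# "Training tool" half: fit the mapping W by least squares.
W, *_ = np.linalg.lstsq(X, Y, rcond=None)

# "In-engine plugin" half: at runtime, convert incoming audio
# features into blendshape weights that drive the facial rig.
new_audio = rng.normal(size=(1, N_AUDIO_FEATS))
weights = new_audio @ W
print(weights.shape)  # one frame of 51 blendshape weights
```

In a real implementation the linear map would be replaced by the paper's trained network, and the runtime half would feed the weights into the engine's morph-target/blendshape system each frame, but the two-tool split stays the same.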
If anyone has an interest in this I would love to consult on making it happen!