[Question] Is any lipsync feature (à la Source2) planned for the near future?
[Question] Would you be able to isolate marker locations (green or red) on the face from an image sequence or a streaming webcam (using multiple raycasting/Blueprint functions), average the pixel locations, place an empty actor at the averaged location, read its XY movement, and then transfer that motion to the morph targets?
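Outside of UE4, the marker-averaging step described above can be sketched in a few lines. This is only an illustration of the logic, not engine code: all function names are hypothetical, and the frame is assumed to be a plain 2D array of RGB tuples (in practice you would read this back from a render target or webcam texture).

```python
def find_marker_center(pixels, is_marker):
    """Average the (x, y) coordinates of every pixel flagged as marker-coloured.

    pixels    -- 2D list of (r, g, b) tuples, indexed [y][x]
    is_marker -- predicate deciding whether an RGB value belongs to a marker
    Returns an (x, y) centre, or None if no marker pixel was found.
    """
    xs, ys = [], []
    for y, row in enumerate(pixels):
        for x, rgb in enumerate(row):
            if is_marker(rgb):
                xs.append(x)
                ys.append(y)
    if not xs:
        return None
    return (sum(xs) / len(xs), sum(ys) / len(ys))


def is_green(rgb, threshold=128):
    """Crude colour test for a green marker; thresholds are illustrative."""
    r, g, b = rgb
    return g > threshold and r < threshold and b < threshold


def marker_delta(prev_center, cur_center):
    """Frame-to-frame XY movement, which would then drive a morph target."""
    return (cur_center[0] - prev_center[0], cur_center[1] - prev_center[1])
```

The same averaging idea carries over directly: in-engine, the empty actor would simply be moved to the returned centre each frame, and its delta fed into the morph weights.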
Alternatively, could the image be manipulated (extracting alphas from the colour channels) so that UE4 can recognize the shapes (the contours of the mouth/eyes, similar to the tracking technology Faceware uses) and derive tracking data from them?
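The channel-to-alpha idea can likewise be sketched outside the engine. This is a minimal stand-in, assuming plain RGB-tuple frames and hypothetical function names: threshold one colour channel into a binary mask, then keep only cells bordering the background as a crude contour.

```python
def channel_mask(pixels, channel, threshold=100):
    """Binary mask (1/0) where the chosen channel exceeds the threshold.

    pixels  -- 2D list of (r, g, b) tuples, indexed [y][x]
    channel -- 0 = red, 1 = green, 2 = blue
    """
    return [[1 if rgb[channel] > threshold else 0 for rgb in row]
            for row in pixels]


def mask_outline(mask):
    """Keep only mask cells that touch a zero cell (or the image border):
    a crude contour of the thresholded shape."""
    h, w = len(mask), len(mask[0])
    outline = [[0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            if not mask[y][x]:
                continue
            neighbours = [
                mask[ny][nx]
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                if 0 <= ny < h and 0 <= nx < w
            ]
            if len(neighbours) < 4 or 0 in neighbours:
                outline[y][x] = 1
    return outline
```

The resulting contour points could then be tracked frame to frame the same way as the coloured markers.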