A few outdated versions of an Android facial-capture project circulated around the forum for a while.
I think the project became a paid Unreal plugin.
However, the end quality is inferior to the iPhone's because of its TrueDepth camera.
You have to realize that to properly animate a face, you need to be able to read movement in depth (near/far relative to the camera). Since an ordinary camera provides only 2D feedback, doing this properly would require at least four cameras (left, right, top, front); gathering enough information from a single camera requires very specific hardware capable of reading depth directly.
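A minimal sketch (with made-up numbers, not real tracking data) of why a single 2D camera cannot recover depth on its own: under a pinhole projection, every 3D point along the same ray from the camera lands on the same pixel, so nearer and farther points are indistinguishable in one view.

```python
def project(point, focal_length=1.0):
    """Pinhole projection of a 3D point (x, y, z) onto the 2D image plane."""
    x, y, z = point
    return (focal_length * x / z, focal_length * y / z)

near = (0.1, 0.2, 1.0)   # a point 1 unit in front of the camera
far = (0.2, 0.4, 2.0)    # a different point on the same ray, twice as far

# Both land on the exact same pixel, so one ordinary camera cannot tell
# whether the face moved closer or just shifted within the frame.
assert project(near) == project(far)
print(project(near), project(far))
```

This ambiguity is exactly what a depth sensor (or a second camera at a different angle) resolves.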
Apple built this for facial recognition, so that, in theory, you cannot spoof it with a picture of yourself.
The ARKit SDK leverages the TrueDepth camera to make depth readings, which makes animation somewhat feasible (note that it is still an approximation).
Anything else you find out there works purely by approximation, making the results much more questionable.
You can try Faceware, or you can draw dots on your face, film footage from at least two sides, and animate bones in Blender to follow the dots.
Both solutions require time and understanding; neither is something anyone can just wake up tomorrow and start on.
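A rough sketch of the geometric idea behind the dots-and-two-cameras approach (all values here are hypothetical, and real setups need calibration): a front camera observes each dot's (x, y), a side camera observes its (z, y), and combining the two views recovers a full 3D position per frame that a bone could be keyed to.

```python
def reconstruct_3d(front_xy, side_zy):
    """Merge a front-view (x, y) and a side-view (z, y) reading into (x, y, z).

    Assumes the two cameras are orthogonal, aligned, and share the same scale;
    the two y readings should agree, so we average them to absorb tracking noise.
    """
    x, y_front = front_xy
    z, y_side = side_zy
    return (x, (y_front + y_side) / 2, z)

# One tracked "jaw" dot over three frames (made-up normalized coordinates):
front_track = [(0.50, 0.30), (0.50, 0.25), (0.50, 0.20)]  # jaw opens: y drops
side_track = [(0.10, 0.30), (0.12, 0.25), (0.14, 0.20)]   # jaw also swings forward

jaw_positions = [reconstruct_3d(f, s) for f, s in zip(front_track, side_track)]
for frame, pos in enumerate(jaw_positions, start=1):
    print(f"frame {frame}: jaw dot at {pos}")
```

In practice you would feed positions like these into bone keyframes in Blender; the sketch only shows why two roughly perpendicular views are the minimum needed to get depth back.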