Most of the time, the facial stuff is layered on top.
Even extra bones sit on top, so you can patch them together within the UE4 skeleton without any issues.
For my workflow I have a master mannequin/character (fingers, toes, face) that was created straight from my Blender plugin (bonebreaker) and rigify.
It takes a good amount of work to get things right, but after the initial skeleton is created and animations are retargeted, it becomes easy to layer up multiple things.
Usually I have dialogue play as a montage on a “face” slot (and the slots above it).
As always, the most time-consuming part is the custom weight-paint job to move the mesh.
The other way to go is to add and manipulate morph targets via curves.
So you create a big set of morphs that manipulate the 32 facial muscles, the eyes, etc.
Then to animate it you play with the morph target curves, which export directly as the animation, yet also let you code or script each morph individually by feeding in different curve values. I tried doing this. I stopped 20 hours and 10 muscles in. It's tiresome and largely impractical even with knowledge of anatomy. There may be “ready to go” rigs you can modify/reshape, though.
The idea with the morphs is that after setup you can film your face with something like a Kinect and transfer the motion of the bones to the curves without needing the extra skeletal bones (fewer bones in the character = better performance).
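The curve-driven setup described above boils down to: each morph target has a named curve of (time, value) keys, and at any point in time you sample every curve and apply the result as a morph weight. Here is a minimal, self-contained sketch of that sampling; the curve names and key data are invented for the example.

```python
# Hypothetical sketch of driving morph target weights from named
# animation curves, the way anim curves drive morphs in the engine.
# Curves are sorted lists of (time, value) keys.

def sample_curve(keys, t):
    """Linearly interpolate a curve at time t, clamping outside the key range."""
    if t <= keys[0][0]:
        return keys[0][1]
    if t >= keys[-1][0]:
        return keys[-1][1]
    for (t0, v0), (t1, v1) in zip(keys, keys[1:]):
        if t0 <= t <= t1:
            a = (t - t0) / (t1 - t0)
            return v0 + a * (v1 - v0)

def evaluate_morphs(curves, t):
    """Map each curve name to a morph weight at time t, clamped to [0, 1]."""
    return {name: max(0.0, min(1.0, sample_curve(keys, t)))
            for name, keys in curves.items()}

curves = {
    "jaw_open":   [(0.0, 0.0), (0.5, 1.0), (1.0, 0.0)],
    "brow_raise": [(0.0, 0.2), (1.0, 0.2)],
}
print(evaluate_morphs(curves, 0.25))  # jaw_open halfway up, brow steady
```

Because the whole face state is just this dict of curve values, feeding in captured data or scripted values is the same operation as playing back an authored animation.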
It’s all fairly complex either way.
The point though is that if you set up the morphs right you can then copy the curves over to any animation.
The bone way is slightly less complex but very much the same, except that the weight-paint part can mostly be automated by properly placing the bones onto the mesh.
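The "automated by properly placing the bones" part works because automatic weighting derives each vertex's weights from where the bones sit relative to it. Real tools (e.g. Blender's automatic weights) use heat diffusion, which handles occlusion much better; the sketch below shows only the core idea with a simple inverse-distance falloff, and all bone names and positions are made up.

```python
# Rough sketch of distance-based automatic skin weighting: each vertex
# gets a weight per bone from inverse-distance falloff, then the
# weights are normalized so they sum to 1. Not a production algorithm.
import math

def auto_weights(vertex, bones, power=2.0):
    """bones: name -> (x, y, z) bone position. Returns normalized weights."""
    raw = {}
    for name, pos in bones.items():
        d = math.dist(vertex, pos)
        # Guard against a vertex sitting exactly on a bone.
        raw[name] = 1.0 / (d ** power) if d > 1e-9 else 1e9
    total = sum(raw.values())
    return {name: w / total for name, w in raw.items()}

w = auto_weights((0.0, 0.0, 0.0),
                 {"jaw": (0.0, 0.0, 1.0), "cheek_l": (0.0, 0.0, 2.0)})
print(w)  # jaw is closer, so it dominates; weights sum to 1
```

This is why bone placement matters so much: a bone that sits slightly off the muscle it should drive will grab weight from the wrong vertices, and you are back to manual cleanup.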
With that, you won’t have direct access to the curves, so to modify the pose later you need to import a full animation that also contains the rest of the skeleton.
Naturally, you can do as I did and play the animations on a specific bone slot so that you don’t have to animate body motion plus facial together, and can keep the two pipelines separate.
Usage-wise, I would love to go the morph target way. It’s more performant overall and doesn’t need an imported animation at all.
The third way, I suppose, would be to go the Alembic route. I have almost no hands-on experience with it, but it’s essentially like exporting a set of instructions that modify vertices, and therefore the mesh, in real time. If you can isolate the face and attach it to the head bone, you may be able to have whatever animation play just by running different files. From what I have tested, they play much like morph targets: a slider controls the frame you are on, and each frame keeps track of where every vertex is, exactly like a morph does, except it does this for the whole mesh.
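The "slider controls the frame" behaviour can be sketched in a few lines: an Alembic-style cache is essentially a list of per-frame vertex snapshots, and a fractional frame index interpolates between the two neighbouring frames, much like scrubbing between two morph targets. The cache data below is made up for the example.

```python
# Minimal sketch of Alembic-style vertex cache playback: each frame
# stores every vertex position, and a float time slider interpolates
# between neighbouring frames.

def sample_cache(frames, t):
    """frames: list of per-frame vertex lists; t: fractional frame index."""
    t = max(0.0, min(t, len(frames) - 1))
    i = int(t)
    if i >= len(frames) - 1:
        return frames[-1]
    a = t - i
    return [tuple((1 - a) * c0 + a * c1 for c0, c1 in zip(v0, v1))
            for v0, v1 in zip(frames[i], frames[i + 1])]

cache = [
    [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)],   # frame 0
    [(0.0, 1.0, 0.0), (1.0, 1.0, 0.0)],   # frame 1
]
print(sample_cache(cache, 0.5))  # vertices halfway between the two frames
```

The trade-off is obvious from the data layout: a cache stores every vertex every frame, so files get large and nothing is reusable across meshes, but you can reproduce any deformation the DCC tool produced, with no rig in the engine at all.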
Anyway, hope that helps.