Efficient way to set face animations?

Hello everyone, I am currently building a face animation system driven by an emotion enum (for example, selecting “Happy” retrieves the data fields required for the happy face). However, I'm finding it difficult, since Set Morph Target seems to be the only way to make this work. I was looking into using pose assets to reduce the amount of work, but I don’t know if that is possible here.

Currently I have a dictionary that maps each morph target name to its target value. Each of these needs to be changed to satisfy the selected emotion. I plan on using a for-each loop to accomplish this, but I worry that this could be a very inefficient system.

Hopefully you guys can help me out. Thanks!

A pose asset will be helpful if you want to drive multiple morph targets (or joints) at the same time; especially for facial animation, you may need to trigger multiple expressions to achieve the proper result.

So you’ll end up with a pose asset containing a list of expressions, where each one can drive different blend shapes to achieve the result you want.

If you take a look at the MetaHuman pose library, you’ll see that there are multiple variations of the same expression, based on the “strength” of that expression. You also have the sliders of the facial rig (which drive the joints) to enhance a specific expression.

I think a pose asset would be an ideal solution, though I’m not entirely sure about the performance cost.