Problem applying Metahuman SDK lipsync in runtime

Hello everyone,
regarding the MetaHuman SDK available for free on the Marketplace, I have some issues applying lipsync at runtime.
I have followed the suggested instructions:

  1. Enable the plugin.
  2. Generate a token.
  3. Create a BP to enable the runtime functionality, as suggested in: Runtime BP implementation
  4. Select the desired .wav file and set the skeleton to "Face_Archetype_Skeleton", which corresponds to the MetaHuman's face.

After doing the above, the lipsync animation does not play: the MetaHuman shows no animation, the output log shows no related warnings or errors, and debugging the BP gives no useful information beyond the fact that execution stops at the "ATL Audio To Lipsync" node.

I want to know, first, why the setup is not working, and then what fixes I should make to get it working.

I would also appreciate any useful documentation on the SDK's BP nodes. For example, the "Make DigitalHumanATLInput" node has some pins that are not very clear, as well as values that lack any documentation.

Did you get this solved? I'm facing the same problem: I can't make the face move. I got this working with other faces, but no luck with MetaHumans…

Hi,
Well, the last time I tried was about four months ago, and there were no fixes yet.
I don't know whether there have been any new updates to the MetaHuman SDK lipsync since then.
If anyone has tried it, please let us know.
@EURO.SM, try this link: Metahuman animation

I don't have issues with Audio To Lipsync and this plugin, at least with version 1.6.0.
You need to make sure you applied the right skeleton; it's best to get it from your skeletal mesh.
For MetaHumans it's the Face_Archetype_Skeleton.

Then you need to choose the right mapping for the MetaHuman: you can select one from the plugin, or use the default ARKit mapping with a standard MetaHuman.
And don't forget to enable the bSetupForMetahuman flag.
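To illustrate the skeleton-mismatch point above, here is a minimal C++ sketch of applying a generated lipsync animation to a MetaHuman's face mesh using only standard engine APIs. Note the assumptions: the function name is hypothetical, and the idea that the plugin hands back a plain `UAnimSequence` is a guess; the actual MetaHuman SDK C++ API may look different, so treat this as a debugging aid, not the plugin's real interface.

```cpp
// Hypothetical sketch (not the plugin's actual API): play a lipsync
// UAnimSequence on a MetaHuman face mesh and verify the skeletons match.
#include "Components/SkeletalMeshComponent.h"
#include "Animation/AnimSequence.h"

void ApplyLipsyncToFace(USkeletalMeshComponent* FaceMesh, UAnimSequence* LipsyncAnim)
{
    if (!FaceMesh || !LipsyncAnim)
    {
        return; // nothing to play
    }

    // The animation must target the same skeleton as the face mesh
    // (Face_Archetype_Skeleton for MetaHuman faces). A mismatch produces
    // exactly the symptom described above: no animation, no errors.
    USkeletalMesh* MeshAsset = FaceMesh->GetSkeletalMeshAsset();
    if (!MeshAsset || LipsyncAnim->GetSkeleton() != MeshAsset->GetSkeleton())
    {
        UE_LOG(LogTemp, Warning,
               TEXT("Lipsync animation skeleton does not match the face mesh skeleton"));
        return;
    }

    // Play the generated animation once in single-asset mode.
    FaceMesh->SetAnimationMode(EAnimationMode::AnimationSingleNode);
    FaceMesh->PlayAnimation(LipsyncAnim, /*bLooping=*/false);
}
```

The skeleton check is the useful part: if it logs the warning, the animation was generated against the wrong skeleton, which would explain a silent failure in the BP graph as well.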

I also used these docs: v1.6.0 - Metahuman SDK