I use Flask in Python to code the webhook and run it locally on my machine; to expose it to the internet and reach the fulfillment in Dialogflow I use ngrok. Within Unreal, to process the answers I use the plugin “Fetch - A Simple HTTP Client”, and for text to speech, “AWS Polly”.
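In case it helps with mapping this out, here is a minimal sketch of the kind of webhook I mean. It assumes the standard Dialogflow ES fulfillment request/response fields and already-configured AWS credentials; the route, voice ID, and output file name are placeholder choices, not the exact code from my project.

```python
# Minimal Dialogflow (ES) fulfillment webhook, run locally and exposed with ngrok.
# The Unreal side (Fetch plugin) can then request the generated audio/text.
import boto3
from flask import Flask, jsonify, request

app = Flask(__name__)
polly = boto3.client("polly")  # assumes AWS credentials are configured


@app.route("/webhook", methods=["POST"])
def webhook():
    req = request.get_json(silent=True) or {}
    user_text = req.get("queryResult", {}).get("queryText", "")

    # Whatever logic produces the bot's answer goes here.
    answer = f"You said: {user_text}"

    # Optional: synthesize the answer with AWS Polly so Unreal can play it back.
    speech = polly.synthesize_speech(
        Text=answer, OutputFormat="mp3", VoiceId="Joanna"
    )
    with open("answer.mp3", "wb") as f:
        f.write(speech["AudioStream"].read())

    # Dialogflow reads this field back to the client.
    return jsonify({"fulfillmentText": answer})


if __name__ == "__main__":
    # Run locally, then expose with:  ngrok http 5000
    app.run(port=5000)
```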
Got it. So in order to run MetaHumans, we also have to use Unreal Engine with webhooks, correct?
They cannot be driven standalone via Python?
I think we are starting to get the gist of this. So we set up a MetaHuman connected via Blueprint to Unreal Engine. Unreal Engine then makes webhook calls to external APIs for things like text to speech and speech to text. Is this the idea?
This is looking promising…
It (probably?) could be combined with a text-to-speech engine to make a text-to-animation pipeline:
Yes, but how? We are trying to map it out.
Hmmm… I’m having a real problem here, and I thought I’d ask if one of you guys could help. You’re all infinitely more knowledgeable about this stuff than me, so here goes.
I’ve got Nvidia Audio2Face to work in UE4, but now I’m trying to import it into UE5 (yes, I know it’s early access) because of the great lighting, because it runs better on my machine, and because I prefer the interface. I can’t get it to work, though. UE5 will not recognize the .uasset files, no matter what I do.
Even if I create and import the animation onto a MetaHuman in UE4, include it in a level sequence, and then copy the whole project over to UE5 to make it compatible, it doesn’t work. The sequence and the animation both disappear from the Content Browser when I open the project in UE5.
Any ideas?
Btw, I’m doing this for fun and for a university presentation on Tuesday; I’m not really a game designer or programmer, and I’m not trying to profit from it. Otherwise, of course, I’d put in the hard work and learn this stuff from the ground up, like you guys have obviously done.
Following on from my previous post, I created a Blueprint to import exported Live Link Face app CSV files into UE. The following example uses the Live Link Face data for head, brow, and eye movement and combines it with my English word-to-mouth animation Blueprint, Realitalk.
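For anyone wanting to poke at those exports outside of Blueprints first, here is a rough Python sketch of reading the CSV and pulling out the head/brow/eye curves. The column names are my assumption about the ARKit curve names the app writes and may vary between app versions.

```python
# Rough sketch: read a Live Link Face CSV export and collect a few curves.
import csv

# Assumed ARKit/Live Link curve names for head, brow and eye movement.
HEAD_AND_EYE_CURVES = [
    "HeadYaw", "HeadPitch", "HeadRoll",
    "BrowInnerUp", "BrowDownLeft", "BrowDownRight",
    "EyeBlinkLeft", "EyeBlinkRight",
]


def load_face_curves(csv_path):
    """Return {curve_name: [float, ...]} for the curves listed above."""
    curves = {name: [] for name in HEAD_AND_EYE_CURVES}
    with open(csv_path, newline="") as f:
        for row in csv.DictReader(f):
            for name in HEAD_AND_EYE_CURVES:
                if row.get(name, "") != "":
                    curves[name].append(float(row[name]))
    return curves


if __name__ == "__main__":
    curves = load_face_curves("MySlate_Take_01.csv")  # example file name
    for name, values in curves.items():
        print(name, len(values), "keys")
```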
Hey everyone…
I have a question about using lip sync for a MetaHuman (MetaHuman SDK) while animating the body of the MetaHuman with a custom animation (preferably an idle animation with a slight movement in place). When I tried that, the MetaHuman head, playing the lip sync animation generated by the SDK, wouldn’t sync with the body playing the custom animation, which obviously leaves the head separated from the body.
So my question is how to keep the head in sync with the body while the head plays the lip sync animation and the body plays the custom animation.
If I am interpreting this issue correctly: in the AnimGraph of the Animation Blueprint for the MetaHuman “Face”, try adding a “Copy Pose From Mesh” node to copy the “Parent” body animation, then feed that into a “Layered blend per bone” node, with the other input being the head animation you currently have.
Thank you very much, sir.
It really did work.
I just have to refine it so the head doesn’t come out of the body.
While looking at the Face_AnimBP of the MetaHuman, I found that those nodes do already exist, but they are connected to different nodes (see pic). So my question is: why don’t those nodes have the effect of syncing the body with the head?
Edit:
Editing to note that the syncing is accomplished without connecting the output node to the Layered blend per bone node; if I connect it, the animation doesn’t play right, in other words the head doesn’t sync with the body and floats outside it. Is this correct?
What I did is just disconnect the Output Pose node.
I have a hunch. Try setting the translation retargeting option to “Skeleton” for both the body and face meshes (except the root and pelvis of course, as usual).
Ok thank you. Will try this of course.
Live Link with an iPhone is probably the fastest.
To make facial animations from voice actor recordings you can also use Nvidia’s Audio2Face.
It requires an RTX card to run, but I have read a couple of good comments about using it with MetaHumans.
https://docs.omniverse.nvidia.com/app_audio2face/app_audio2face/overview.html
Hello everyone, I am having exactly the same problem explained here:
I tried to apply the described solution but am still experiencing the same issue (desynchronization between the face animation and the body animation).
I think it is because, when setting a lip sync animation at runtime, the code changes the face from “Use Animation Blueprint” (where the blending logic is implemented) to “Use Animation Asset”.
Thanks in advance.
Over a year later and I see almost nothing written about the SDK. I’ll be testing it tomorrow in Unreal 5.1.
Almost all tutorials about speech-to-animation are for UE4, so I’ve been struggling to get anything working. I had Faceware up and running, but my trial expired.
Tested it.
The SDK works in 5.1.
It isn’t perfect, but it saves a lot of animation time. Cleaning up the MetaHuman animation afterwards by baking it to the rig should help make it smooth.
But it only worked after following this tutorial. The SDK must be manually placed in your project, as demonstrated below.
The only difference for me: instead of choosing the “emapping” option, you have to choose “metahuman” in 5.1.
How does it work? I use UE5, and the MetaHuman face Animation Blueprint already has the “Copy Pose From Mesh” and “Layered blend per bone” nodes. But when I use the MetaHuman SDK for lip sync with the “Play Anim” node, the head is still separated from the body (the body also has an animation). Can you share your Unreal project?
Exactly the same thing happens to me. Any solution?
Any information about the MetaHuman SDK for UE 5.2?