
Facial Animation Audio Dialog

Where do we get the software for making facial animation with audio dialog?

FaceFX or good old Softimage Facerobot :wink:

Hmm, it sounds pretty technical to me; I need it explained in simple layman's English. But I find it interesting… There's no download link for this face system as far as I'm aware.

  1. Does this lip sync system only work with models loaded into Maya?
  2. Do the models have to be all hard coded first in Maya with this cube?
  3. Does the model in Unreal Engine need to have a facial rig in order to use this cube system?
  4. I don’t have Maya. Only got Blender.
  5. How does this work when assigning the actual dialog text to the facial animation? I have to read all my dialog text in from a data table using an array, where my dialog is already structured into words and sentences by row…

58,000 words is a lot. Is this word dictionary customizable (can we add new words to the dictionary to customize our dialog)?

Or does this system only work by picking words out of the dictionary one by one to form the sentences and paragraphs of our dialog? If it works that way, that's a lot of Blueprint coding for each of those words. Or can we just read the structured words in by row from a data text table into an array, and have the Blueprint lip sync those words from the array?

Both FaceFX and Facerobot are capable of generating lipsync animation, so it's not a "system" but rather a feature both have.

  1. The only Maya plugin I know of that can produce lipsync is Voice-O-Matic by Di-O-Matic
  2. FBX is the way to go between programs; no idea why you're talking about the cube
  3. Yes, joint based or blend shape based, it’s up to you how you want to rig the character
  4. You do the rig in Blender, then go to FaceFX, then to UE4…alternatively you can do the lipsync in Blender
  5. See above

The best advice would be to do this in batch, since you have 58,000 words… doing that manually is going to take ages, so I strongly suggest checking all the possible alternatives and seeing which one is the best solution.
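To give a feel for the batch idea: the sketch below just plans the jobs, pairing each dialog audio file with the animation asset a batch lipsync tool would write for it. The folder layout and `_lipsync.fbx` naming are hypothetical; whatever tool you pick (FaceFX's batch mode, a script around Papagayo, etc.) would do the actual processing per pair.

```python
from pathlib import PurePosixPath

def plan_lipsync_batch(audio_files, out_dir="LipSync"):
    """Pair every dialog audio file with the (hypothetical) animation
    asset name a batch lipsync tool would export for it."""
    jobs = []
    for f in audio_files:
        stem = PurePosixPath(f).stem          # "Audio/line_001.wav" -> "line_001"
        jobs.append((f, f"{out_dir}/{stem}_lipsync.fbx"))
    return jobs

# With thousands of lines of dialog, you loop over the plan instead of
# clicking through each file by hand.
for src, dst in plan_lipsync_batch(["Audio/line_001.wav", "Audio/line_002.wav"]):
    print(src, "->", dst)
```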

Unfortunately, Blender corrupted the bones in my rig during loading (the rig was in a Blender file). I figured out Papagayo in about 5 minutes, but as soon as I loaded up the Blender rig, I was given this.

There are no import options for fixing corrupted rigs inside Blender files. I guess this means lip syncing in Blender is out, because I can't open the mouth of my rig with the bones all bunched up in the face like this. The bones have all been rotated the wrong way. I don't have enough knowledge to set up the shape keys for the mouth phonemes. I thought you just go into Pose Mode and manipulate the bones in the face, but the guy who did it in Blender with the monkey mesh had to move the vertices around on the face. I have a humanoid mesh, not a simple monkey mesh, and with the bones all wrong like this I can't manipulate anything.

Rigging a custom mesh with Mixamo won't work either; it won't give me a root bone. They deliberately left it out so their auto-rigger only works fully with their store models, so for custom models you only get a hip bone and no root bone. Unreal can't even retarget the rig properly without the root bone, so when you go into the Skeleton Editor you see a big ugly red line where the missing root bone should be. And if you add the missing root bone to the skeleton in another modeling program and reimport it, Unreal will likely say the skeleton is now incompatible with the animations because the skeleton has been changed… which means you have to add the root bone to every single animation FBX as well and reimport everything all over again.

I loaded the Mixamo animations for the Axel rig, duplicated the Anim BP, renamed the copy for my character rig, and added the Mixamo animations to the state machine. But when I select my mesh and assign the Animation Blueprint to it, none of the animations play in the viewport; he just stays stuck in T-pose. I don't understand. All the Mixamo animations play fine in the animation editor, and I did check 'In Place' on the custom animation pack before exporting it to FBX from Mixamo, so he moves only on the spot, but the animations won't play at all when I select his mesh and then assign the Anim Blueprint.

His skeleton refuses to play the animations, so I went into the Content Browser and then into his state machine, and his animations were NOT EVEN SHOWING UP in the anim asset list. I thought, huh? Why are only the mannequin's animations showing up in the list? And when I try to drag his Mixamo animations from the Content Browser into his Anim BP state machine graph, I get an error saying his skeleton is not compatible with his own animations. So I couldn't add his animations to the state machine to play them…

But that's the same skeleton I used to create the animations in Mixamo!! So the animations should have been compatible.

But unreal refuses to let me use the animations for that skeleton…

So it seems that if there's no root bone in the Mixamo rig, Unreal refuses to show the Mixamo animations in the state machine list and will only show the mannequin's animations instead.

Hey guys, is there any software for making facial animation with audio dialog? :)

FACEFX Studio Professional is one program I can think of, but it costs $899 for the license, and you have to pay that in order to unlock the save and export functions in their software.

FACEROBOT from Adobe: it's not available anymore because the link is broken.

FACE PLUS (3D animation capture program by Mixamo) only works if you have a Unity Pro license (that's about $125 a month, I think) and will not work with version 5 of the Unity engine. And I don't know if it works with a standard webcam or if you need a special camera for that plugin…

PAPAGAYO: I wish the Blender plugin for this program inserted all the Blender shape keys for you, so you could just tweak the shape keys afterwards and export to Unreal Engine… Unfortunately, you have to manually create all the basic shape keys in Blender before it can use the .dat file of your audio, and I can't do that at the moment because of the problem I'm having with the bones in my army guy's armature, where the face bones all face the wrong way when I load up my blend file.

Does anyone here know how to correct the bones in the armature so the face bones face the right way? They should not look like this, and I do not know how to fix it.

ROOT BONE MISSING IN SKELETON AFTER AUTO-RIGGING WITH MIXAMO: Fixed it by following Nannestad's advice in his tutorial: rename the Armature to Root, then re-export. But doing so invalidates all the other preloaded animations, so once the skeleton has been fixed, you have to fix up each animation as well to add in the missing root motion.

DAZ 3D STUDIO: I also looked at Daz 3D Studio, but the license is not suitable; it seems you can only use those models in other Daz 3D products.

If you're having trouble with the bone-based facial rig, the best solution would be to use a blend shape driven rig, so that you'll be able to get the lipsync done and adapt it to your needs.

A while ago I made a video showing a custom solution to drive animation based on a custom rig, and I guess you can use a similar approach (I made it in Maya; you can do the same in Blender).

FaceFX: Yep, $899 to get the full export/save functionality.

Facerobot: Not Adobe but Autodesk; you get Softimage (with Facerobot) if you buy the Creation Suite from them.

Papagayo: Maybe you didn't get one thing: if you want to create facial animation, you need to create the facial rig and the poses for each phoneme. Software that creates the facial expressions for you (joint based or blend shape based) does not exist, so you have to do that manually.
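Concretely, Papagayo's default phoneme set is the Preston Blair one (AI, O, E, U, etc, L, WQ, MBP, FV, rest), so "doing it manually" means sculpting one mouth pose per entry in that set. A minimal sketch of the mapping you'd maintain; the `mouth_*` shape-key names are hypothetical and should match whatever poses you actually sculpt in Blender:

```python
# Papagayo's default (Preston Blair) phoneme set on the left; the
# shape-key names on the right are made up for this example.
PHONEME_TO_SHAPE_KEY = {
    "AI":   "mouth_AI",
    "O":    "mouth_O",
    "E":    "mouth_E",
    "U":    "mouth_U",
    "etc":  "mouth_etc",
    "L":    "mouth_L",
    "WQ":   "mouth_WQ",
    "MBP":  "mouth_MBP",
    "FV":   "mouth_FV",
    "rest": "mouth_rest",
}

def shape_key_for(phoneme):
    # Anything unrecognized falls back to the closed "rest" pose.
    return PHONEME_TO_SHAPE_KEY.get(phoneme, "mouth_rest")
```

Whatever drives the animation (keyframes on shape keys, drivers, an exporter script) would look phonemes up through a table like this, so the sculpting work is ten poses, not one pose per word.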

Mixamo issue: Google is your friend, there are countless solutions for that. I even released a free script to retarget Mixamo animations to the UE4 mannequin.

I can insert the shape keys, but I can't drive his face rig yet, because the simple truth is this: I DON'T KNOW HOW TO DRIVE 3D humanoid facial rigs with blend shape keys. I haven't got a single clue how to set up all the poses for this rig, and I haven't got the advanced knowledge for driving 3D animation with those phoneme keys; not with a complex 3D humanoid rig like this army guy. This is a lot more advanced than just driving a little monkey's mouth. I have limited knowledge of Blender and do not know the advanced blend key facial animation pose stuff, so I cannot set up the poses myself. It's too advanced…

Now, as for Papagayo, I need to know two very important things: do we have to create new shape keys every time we load a new audio file into Blender, or can we just use the same set of shape keys for every audio file we load into Blender with that model?
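For what it's worth, the way Papagayo's export appears to work, the shape-key set stays fixed per model; each new audio file just produces a new timing file (the MOHO switch `.dat` export: a `MohoSwitch1` header, then one `frame phoneme` pair per line) that re-uses those same keys. A small parser sketch, assuming that format:

```python
def parse_moho_dat(text):
    """Parse a Papagayo MOHO switch export (.dat): a 'MohoSwitch1'
    header followed by one 'frame phoneme' pair per line."""
    lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
    if not lines or lines[0] != "MohoSwitch1":
        raise ValueError("not a MOHO switch file")
    cues = []
    for ln in lines[1:]:
        frame, phoneme = ln.split()
        cues.append((int(frame), phoneme))
    return cues

sample = """MohoSwitch1
1 rest
5 MBP
9 AI
14 rest
"""
# Each cue says which mouth pose (shape key) to switch to at which
# frame -- the poses themselves are sculpted once per model.
print(parse_moho_dat(sample))
```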

I've had this face rig mess dragging on for six months now, ever since I bought him, because I was told this model was game ready, until I found his face rig bones were all in a mess. So how can I set up his face keys for lip syncing if I don't know how to set up all the 3D face key animation poses to drive those keys for his rig? Answer: I can't. I don't have that kind of advanced Blender knowledge to get his face rig going.

Ok, if you're trying to create lipsync/facial animation but you have no idea how a facial rig (joint or blend shape driven) works, then you'll waste another 6 months doing absolutely nothing, or just messing around on forums trying to get help.

STOP

Learn how a facial rig works, learn how the Mixamo (it is a Mixamo character, right?) facial rig is done, learn how to extract phonemes from a joint-driven rig and convert them into blend shapes, learn the logic behind how Papagayo works, and learn how to make your own rig that lets you get the results you want with Papagayo in one click.

THEN try to test the lipsync in Blender.

If you have no idea what you're doing, then when you have issues with the model you'll have no idea why you're having them; you're just wasting time. And even if you somehow find a solution that works, any small issue will become a major issue, because you won't know what is causing it or how to fix it.

Mixamo + FacePlus + Unity works out of the box with any webcam, so if you need something quick and dirty, go for it. But if you want something you have under control (and want to understand what's going on), read above :wink:

I disagree; it doesn't take six more months to fix the rig. I was given a rig with a very sloppy job done on the armature and the facial bones, and I was told the rig was game ready when it wasn't… So it looks like I will have to take it to a professional to fix it. Personally, I think artists need to learn how to clean up their own rigs before trying to sell them to me or to others… The reason I can't do the pose animations is that a shoddy job was done rigging up the face bones of this rig. Unless I know how to move the mesh to do the face pose animations with the blend keys, I cannot drive the rig. In other words, it's much harder to rig for lip syncing if the face bones on the rig are not usable.

This is the fourth time you've tried to change your reply?

First you just wanted an answer, then you said you didn't want to hear lame excuses, then that I need to be professional, and now you blame the model you bought.

You have a lot to learn, kid.

Good luck with your project.

I have a professional friend who works in the game industry who might be able to fix the rig for me. Because she works in a professional studio, she told me all about how some of the indie studio guys who believe in lower standards like to use their technical know-how as a weapon against others, so they're not always very helpful. But I have different standards.

Ok, so you blame me for getting a bad rig from the store? There were no screenshots of the face rig when it was up for sale, only screenshots of the mesh. So I couldn't have known the armature inside the mesh was messed up before the purchase; the mesh looked fine, but the armature was a mess, and I wouldn't have known that until I loaded the file up in Blender, clicked on the armature in the Outliner, and saw all the mess… And unfortunately, if you contact support to get the artist to fix their product, the artists do not always answer back. So you lose out and have to find someone with the knowledge to fix it for you.

Mmh…“talk down to me like that just because I have rigging issue”

"I've had his face rig mess problem now for six months, it's been dragging on since I bought him because he was supposed to be game ready"

You mentioned that you wasted 6 months on this, and you mentioned (in caps) that you don't know how to drive a rig, so you realized that without basic knowledge of the rig you're stuck.

And you consider advice from someone who knows how this thing works "talking down to you", just because I tried to tell you what to do before attempting lipsync?

I'm done trying to give you advice.

I'm not doing it to be rebellious or stubborn or defiant. I'm doing it because of the global threats of our times, like World War III and the threat of global recession. That's why I decided to skip studying all the basics of everything; I knew doing that would take a year or two. That's why I put the issues I was having up on this forum, or go through my professional friend to nip those rigging or engine issues in the bud quickly. For little issues with meshes she doesn't charge me, but for issues with rigs, like repairing a face rig, I will be charged. I nearly forgot about him; that's why I didn't do anything with him for six months, until a few days ago I saw him and his messed-up rig again and thought, well, I need to do something about fixing you up… That's why he's been on the back burner for six months, until I decided to bring up his issues a few days ago.

I do want to get on with game development quickly once those rigging and engine issues have been fixed, so I can get a game developed in time before all these other things arrive at our shores. I had to take changing global economic factors into account too… So yeah, I'm trying to cut engine troubleshooting time and trouble with rigs down as much as I possibly can, so I have more time for game development. Otherwise, if I do things the traditional way and learn all the basics first, by the time I get through all of that and start the game development, it could be too late. So I have to develop the game now, not wait until later…

Another option is to use iClone. The lipsync feature in iClone (or CrazyTalk, if you start there) is a fairly straightforward pipeline to UE4: feed it text or an audio file and it gives it a go; then you can adjust.

Unfortunately this is not lipsync on the fly, so you need to prepare every sentence spoken by a character in advance and export to FBX.

iClone? CrazyTalk? Thanks for that, I'll go take a look at them… Well, I downloaded CrazyTalk. Ok, here's the deal: the iClone series seems to take all the nuts and bolts out of it, so you can adjust the rig with slider menus instead of messing around with bones to get the poses, because if you adjust bones in Blender you can sometimes trigger that rotation scale bug.

BUT

With this program, CrazyTalk, it looks great and has nice models to choose from, but it doesn't let you load your own model without first getting the full license, because it only loads iClone files and iClone models. I can't buy a program I can't even test with my own FBX model. I don't mind save or export being disabled in trial versions, but I DO mind it when they disable import so you can't load your own FBX model to demo in the program.

So if I use Papagayo, do I need just one set of shape keys for my model, or do I need to create thousands of them, one set for every audio file my model uses?

Something to consider is the hidden cost of a facial animation solution in an indie development environment based on off-the-shelf products. An option, if you have the money, is an application and hardware that can do the job for you, but in most cases you would then have to hire actors, unless you wanted all of your dialogue to sound the same across a broad range of characters.

Personally, the best all-round solution I've found is using the Voice device in MotionBuilder to capture an audio track and convert it to animation. An expensive solution, but the amortized cost would be close to the total cost of a dedicated facial animation solution.

https://youtube.com/watch?v=JgcZr4LrFtc

Ok, I have looked briefly at MotionBuilder by Autodesk. I'm making a non-commercial game, so I'm not sure how that affects the license if I decide to buy and use that program. I installed MotionBuilder, only to find my FBX won't load because it's not the latest FBX version. Okay… so I converted it to FBX 2013 and now it loads. Hmm, it seems the lip sync script or plugin was not installed with the program when I installed MotionBuilder. And maybe it won't work until my face rig bones are fixed or redone. So it looks like I have to fix this rig first before MotionBuilder can even use it.

Ok, CrazyTalk 8. It's taking me a while to get used to the navigation of this program. Ok, I found New Actors; good, it finally lets me load in a custom picture instead of only allowing the library's own defaults.

Do they have any other teeth than these? They look creepy now that I've got the guy talking.