
Best way to create dialogue(talking) animations using only Blender/Unreal?

Hi, I am an aspiring indie game developer. For the moment I am working solo on a ramen-noodle budget, so I can't really afford expensive tools to work with. I have probably at least 8 years of experience developing mods for Skyrim and Fallout 3/New Vegas, and I'm looking to develop an actual game sometime in the future. Right now, my goal is to learn C++ and to see if I can get a character fully working in-game.

Basically, my current goals are:

  1. Model a basic male and female character in Blender with proper UV mapping, rig it to the Unreal skeleton, and bring it into Unreal.

  2. Create morph targets for the face so that the face can be completely manipulated.

  3. Set up a basic character creation system with in-game morph sliders, so that the player can change the way their character appears before the game starts.

  4. Add all basic movement animations.

  5. Create lip animations that are synced to wav files to ensure that dialogue will work properly.

    What I am concerned with is how to animate the face properly and how to get it to sync up with a WAV file. With Skyrim, you could create lip files: the game would analyze your sound file and create facial animations automatically. I'm assuming that most people here just record a sound file, create basic lip animations with morphs for different sounds, then create a different animation for each WAV file by hand in Matinee. Is this correct?

    The other thing is that most people recommend separating the head from the rest of the body for this purpose. I've noticed that Skyrim seems to separate the head at the base of the neck, but there are usually issues with seams when doing this. My other concern is that if the head is only animated through morphs and the body is animated through a rig, how do you turn the entire head? Wouldn't you still need a neck bone?

    It would be great to see a video tutorial on how to go through this process: where to separate the head, where to add the neck bones, etc.

Any advice on this would be helpful.

Easy stuff first.

Morphs are additive and are applied on top of the already-animated rig, so the only way the head would turn using morphs is if you authored a morph target that turns it. Head rotation normally comes from the rig itself.

The reason for detaching the head is to be able to attach it to different body styles without having to create fit-to-finish characters. With alpha character systems like Genesis and iClone it's much easier to create as many different characters as you need without having to invest the time building them from scratch. The only reason to build characters from scratch is if you enjoy the process, but based on your description you would only be adding more to an already massive learning curve.

Using Skyrim and Fallout as the examples, they both use a simple “talking mailbox” system where only the jaw bone is animated, so attaching an audio driver is rather easy: it moves the mouth based on the amplitude. This is an OK approach, but procedurally driven animations, more so facial animations, tend to fail if used to drive dialogue in real time (just look at the mess that is Mass Effect). So the top-down design direction would depend on whether you need characters that can act or just a chatterbox.
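The “talking mailbox” idea — the jaw opening purely in proportion to audio loudness — can be sketched in a few lines of Python. This is only an illustration of the technique, not any engine's actual API: the function name, the frame windowing, and the 0.3 sensitivity constant are all made up here.

```python
import math

def jaw_curve(samples, sample_rate, fps=30, max_amp=32768.0):
    """Map audio amplitude to a 0..1 jaw-open value, one entry per animation frame.

    samples: mono 16-bit PCM samples as ints (e.g. unpacked from a WAV file
    via the stdlib `wave` module). This is the "talking mailbox" approach:
    the jaw simply opens wider as the audio gets louder.
    """
    window = max(1, sample_rate // fps)  # audio samples per animation frame
    curve = []
    for start in range(0, len(samples), window):
        chunk = samples[start:start + window]
        # Root-mean-square loudness of this frame's slice of audio.
        rms = math.sqrt(sum(s * s for s in chunk) / len(chunk))
        # 0.3 is an arbitrary sensitivity factor; tune to taste, clamp to 1.
        curve.append(min(1.0, rms / (0.3 * max_amp)))
    return curve
```

The resulting curve would then be keyed onto a jaw bone's rotation (or a mouth-open morph) at the chosen frame rate.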

  1. A very time-consuming process and, once again, one that should only be attempted if it's something you have an interest in doing. I would recommend you do not use the Unreal rig, as it does not reflect real-world proportions, nor are the joint locations suitable for animating both male and female forms. At the very least you might want to consider using MakeHuman if cost is a concern.

  2. Making morph targets is, once again, time consuming, and as mentioned there are alpha character systems that will do the job for you (in some cases in a single day). On the other hand, if you wish to learn how to do it the hard way that's a good enough reason, but personally I would rather learn how to do it the easy way first. :wink:

  3. There are already a few different character systems available, so no need to reinvent the wheel. As a player feature, custom character creation would be desirable.

  4. You want to do this in-game? I guess it would be possible for simple sync, but for anything more complex that requires acting, it's best to author the animations using something like MotionBuilder or, on a budget, Blender.
    Reading the post a few times, you're asking a lot of just two applications, which I assume is based on cost considerations, and you're taking on the project from the bottom-up direction (going by what you see).
    As a best-practice starting point you would be better off tackling the details from the top-down direction (based on what you know) by breaking your current goals into much more direct questions as to the intent and purpose the design needs to serve.

I’ll try and be more direct.

Basically, I need two things:

I need a simple system like Skyrim/Fallout where the player can talk to an NPC. The NPC must be able to respond with mouth animations and expressions depending on their temperament.

I also need an in-game character creation menu, much like Skyrim/Fallout.

I have two concerns with using iClone. One is that you have to pay a licensing fee for every asset that you use. I'm not sure what these fees are for characters. I'm also not sure if they would allow me to use the unedited base assets for my in-game character creator.

You mentioned that there are already some character creation systems in place? Could you point me to one that I could potentially use?

The strange thing about this is that Oblivion/Fallout: New Vegas did this same thing, but they only use one body? The only difference would be male and female, and the game has a male and a female head for the corresponding body?

So since morphs are layered on top of an already-rigged mesh, you should be able to turn the head with a neck/head bone, correct? Then just morph the facial expressions?

Unfortunately, MotionBuilder is quite a ways out of my budget. I just need a simple system that allows an NPC to talk and switch between expressions on the fly. I'm trying to figure out the most straightforward method for this.

Ahh, OK, so from scratch is not necessary, compared to first establishing a framework that will allow everything that comes afterwards to function as expected. AKA top down: knowing what will work. With an eye towards low-cost, cheap, or better still free, the “best” option in my opinion is Daz Studio using the Genesis 3 framework.

Daz Studio

Daz Studio is a feature-rich environment with a lot of use for digital asset management, and top of the list is full control over unique character design requirements. The learning curve is steep, as it's a complete character management system, but without going into the details it's well suited as a companion app for UE4. Best part: it's free, and zero fees are required no matter what your use is.

Genesis.

Genesis is the base that Daz3d uses, as do the merchants selling digital products, to create new and original art as a derivative of the base G3 shape. For example, Genesis 8 is the base used to create Victoria 8, which includes a full body and head injector. The only concern is that the G3 base does require an interactive license, but it's only required if you need to include the raw base, excluding rigging, with the project.

If you make a derivative or original digital art that works with the G3 framework, then the work is your property to do with as you please, which includes giving it away or even selling it as a completed product. Bottom line: if you are looking to avoid out-of-pocket costs it's possible, and our team made the decision to use the G3 framework for a lot of the reasons you mentioned. DS and G3 are a true out-of-the-box procedural character development package that you can easily build on without the need to reinvent the wheel. :wink:

Morph targets in general.

Targets are only relevant if transform data has been applied using the master as the reference base. If vertex id 110 has an XYZ value of 0,0,0 it's ignored. If it has an XYZ value of +10,-100,+5 then the offset is added to the current value of vertex 110. This is how injectors work as part of the G3 framework, allowing a single form to take on many different shapes or characters.
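That per-vertex offset rule can be written out directly. Here's a minimal Python sketch (the function name and data layout are hypothetical, just to illustrate weighted, additive morph application):

```python
def apply_morphs(base_verts, morphs, weights):
    """Additively apply weighted morph-target offsets to a base mesh.

    base_verts: list of (x, y, z) tuples for the master/reference mesh.
    morphs: dict name -> {vertex_id: (dx, dy, dz)}; vertices with a zero
    offset are simply absent (ignored, as described above).
    weights: dict name -> 0..1 slider value for each morph/injector.
    """
    out = [list(v) for v in base_verts]
    for name, offsets in morphs.items():
        w = weights.get(name, 0.0)
        if w == 0.0:
            continue  # slider at zero: this morph contributes nothing
        for vid, (dx, dy, dz) in offsets.items():
            out[vid][0] += w * dx
            out[vid][1] += w * dy
            out[vid][2] += w * dz
    return [tuple(v) for v in out]
```

Stacking several morph dicts with different weights is exactly how one base form becomes many characters.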

For example

https://www.daz3d.com/rawart

Each and every character is based on one of the 4 available Genesis frameworks.

As a suggestion.

By the sounds of it you don’t need the complexity of morph targets, along with all the pain in the … that goes with it, but could make use of cluster facial animations that the G3 framework is already rigged to do.

Guessing at how it's done in Skyrim, I would think cluster shaping is used, as the shaping is done using the same animation channels as, for example, a walk cycle, and does not require one to follow the per-vertex rule of a morph target. One could script lip sync by importing individual shapes.
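Scripted lip sync from individual shapes amounts to turning timed phoneme cues into a per-frame shape track. A minimal Python sketch follows; the phoneme-to-shape map here is a made-up placeholder (real viseme sets are much larger):

```python
# Hypothetical mapping of phoneme cues to named facial shapes.
PHONEME_TO_SHAPE = {
    "AI": "mouth_open",
    "O": "mouth_round",
    "MBP": "mouth_closed",
    "rest": "mouth_closed",
}

def shapes_per_frame(events, duration, fps=30):
    """Turn timed phoneme events [(start_seconds, phoneme), ...] into a
    per-animation-frame list of shape names: a scripted lip-sync track
    that could then be keyframed onto cluster shapes or morphs."""
    events = sorted(events)
    track = []
    for frame in range(int(duration * fps)):
        t = frame / fps
        current = "rest"
        for start, phoneme in events:
            if start <= t:
                current = phoneme  # latest cue that has already started
            else:
                break
        track.append(PHONEME_TO_SHAPE[current])
    return track
```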

There are other options available but as compared to the above they are just toys that require a lot more attention to make functional as to design.

I think there is an indie license requirement for Daz Studio, and you have to purchase the various models, I believe? I'll look into it a bit more. I also expect the polycount and draw calls to be pretty extreme, even for Unreal. The free version is only for the use of 2D art made with the software.

The more I think about it, the more I am looking into the legality of using iClone for Unreal characters. They do have a plugin specifically for use with Unreal Engine. Anyway, I have contacted them about it. Really, the worst-case scenario is that I can't make a character generator and may have to license my models. I do like the animations, and licensing an animation is only $2-3. The software is pretty affordable as well. The problem, again, may be that I will need to work with morphs, I assume.

Edit: Just confirmed that for the models, you need to purchase a special interactive license for the models that allows for it to be used in a video game. The ones I found are around $50 a pop for characters. Will see how iClone responds.

There is also this: http://www.manuelbastioni.com/

Might make a nice compromise. Trying to find out from the author what I can do. Looks like expressions are still driven by morphs.

I also found a character creator program for Unreal on the marketplace, but it looks to be fairly incomplete. Current version does not allow for female characters and a lot of folks in the comments are reporting bugs that have never been fixed.

Cool, OK, now you're thinking top down. :wink:

Just to clear up a few points: Daz3d sells digital assets. Daz3d has a program they use to support their products called Daz Studio, which is free to use for any purpose you wish. Genesis is the base framework they use to develop other products, which is provided free of charge, and the only time you need to pay the interactive fee is if you need to include the base G3 mesh and textures.

As to options, you can select from 4 different base versions of Genesis, and for the purpose of a video game the G3 base is well suited as a replacement for the Epic rig. The base count for the form is 32k tris (16k polygons). By comparison, that's 10k less than Epic's Blueman.

Licensing is a bit confusing as Daz3d has just made changes to the license extension requirements but since the origin is based on the Poser format there is a TON of options and alternatives available under creative commons.

If you just want to make use of the framework, aka the rigging base, you can export just the rigging and use that along with Daz Studio to create your own art assets with no additional interactive fees. For our needs this is what we are using G3 for, as we are making our own clothing and heads in ZBrush and channeling them to DS via the GoZ bridge. Also, the guy who started the MakeHuman project is running the show, so I would expect useful feature sets to be added…

The ManuelbastioniLAB option looks very interesting, as it's a plug-in solution that uses Blender as the host app and its output is covered by Creative Commons. Looking at a few tutorials, ready-made rigging and UV mapping are covered, and since it's already in Blender, making your own morphs should not be a problem.

iClone, for our needs, was excluded as it's far too expensive just to kit up with the requirements to export a ready-to-use character into UE4. As well, unlike Daz3d's interactive license for purchased assets, Reallusion charges an export license that is required even if you just want to bring the asset into Blender for rendering. Maybe there's some Creative Commons stuff to be had, but the amortized per-character cost is just too high.

Interesting. I'll take a closer look at the G3 setup. By exporting the rigging data, are you talking about exporting a body/armature and using that to copy bone weights/morphs to a new, custom body asset? Or is there a more direct way to do this, i.e., importing my own custom mesh, rigging it inside of Genesis, and exporting the new, rigged mesh with the armature?

I wonder about exporting animations with Genesis. Is that possible as well?

BTW, to clear up another point: the reason I was wanting to stick with the Unreal armature is for possible use with a plugin called Allright Rig. It allows you to take any mesh that has been rigged with the Unreal armature and animate it inside the game engine. I've never been a fan of Blender's animation system, unfortunately. Maybe I just need to experiment with it a bit more.

The other question is: what would be the best way to get animations in-game? The three ways I can think of are as follows:

  1. Create the animations inside Blender or Genesis alongside a sound file, export both to Unreal, and make a dialogue blueprint that plays the sound file with the animation for each piece of dialogue?
  2. Create different animations for each mouth movement and make a master blueprint that drives the movement via sound drivers?
  3. Create different animations for mouth movement, then use Matinee to combine them and use them inside of dialogue blueprints?

Beyond that, I assume I probably need a master blueprint of sorts to handle emotion states and to allow an npc to track the player or another npc with his/her eyes and head.

.

By default, when you import a character model from any 3rd-party application, you are given the option to assign the mesh to an already-imported rig. If you leave this option blank the first time, the armature is saved as a separate file, as is the mesh, which contains the weight table and morph targets; as well, you can import the textures and assigned materials and generate a physics asset. Using this first-import feature you can add all of the game requirements using the single character framework and instance the parts that are attached from whatever mesh object you import that can use the single framework.

For example, if on the second character import you select the first armature/rig imported, then only the mesh is imported with just the attached morphs. With this single-channel approach you can make as many different characters as you wish, but only have to do the animations and ragdoll stuff once, and instance the parts as part of a character BP. You could take it a step further and combine all of the head shapes as part of a single head object and create an on-the-fly dynamic crowd sub-system, say like in Assassin's Creed, without having to do unique characters.

.

Yes, there are some animation features built into DS that can be exported to FBX. Since I use MotionBuilder for our animations I'm not up on the extent of the tool set, but I see no reason why one could not use DS to do simple stuff like cycles and expression sets. For example, I used DS to create an expression set of about 30 different shapes using cluster shaping that can be layered per bone for general expressions (happy, sad, pain) as well as phonemes, to do a simple chatterbox system.

.

Well, as an op-ed and as an animator: the base Epic rig has some serious problems as a base. Working from the top down, the start should have involved a bit of thought as to how the overall design is going to paint everything that comes afterwards into a corner, unless the need is for 6-foot-3 robots. So for our needs we went with Genesis as a “replacement” for the Epic base rig.

As an opinion, the Epic rig as a single-point asset has created a development pathway that, for most off-the-shelf solutions, ends when the need arises for more AAA-type character actors and more complex designs. I could go into a rant, but if anything I think it's time to replace Blueman with something a bit more advanced, as its current use really limits any BP based on the design and really needs an update.

Bottom line, and with due respect: Blueman is a really, really bad single starting point and should never be used as part of a modular top-down design. Unless you need 6'3 hunched-back monsters that cannot even talk or express.

.

To be honest, I don't know about that stuff and have no need to learn it. As I mentioned, we use MotionBuilder, and it is possibly the number-one reason why AAA studios no longer have 300 animators but maybe 3 or 4. It's a case of letting the software do the hard work.

  1. MB has a voice device that allows sound samples to drive the facial animations, so if I need a Genesis character to talk I just pull in a template, add the dialogue track, and plot the result just like any other kind of animation requirement. The annoying thing is UE4 tends to blend everything, so the audio tends to go out of sync; there is a timing issue I have yet to figure out.

  2. Animation data is animation data, be it a walk cycle or facial animation, but based on experience, attempting facial animation as a procedural event ends up looking goofy (Mass Effect comes to mind). Once again, MB can save you a lot of effort by keyframing the dialogue to a sound track; then you use UE4 to trigger the animations just like any other event, be it at run time or via Sequencer. Using a sound driver in UE4 would be the last thing I would go with, as it only solves the bottom-up problem (aka make it work today) rather than solving the problem top down at a system level.

  3. Sure it’s possible but as an animator the process sounds more like trying to write a book using a spreadsheet. Sure you could but why would you?

Opinion-wise, once again the root of the problem is not in the bottom-up direction of solving problems with patchwork solutions, but in first starting with a single-point solution that EVERYONE agrees to use as the foundation on which more complex designs can be created using a modular approach, which UE4 is already very good at. :wink:

As a suggestion, if starting a project from scratch, serious consideration needs to go into developing a framework foundation that solves the problems you might have, and that you know will work, rather than doing patchwork that only solves the bottom-up problem. :wink:

So first question anyone???

What makes for the ideal character base as a single point solution? Once you have a solid foundation everything else follows.

Again, remember that I can't afford tools like MotionBuilder. I wish I could, but I am a 34-year-old cashier in a small town with a 3-year-old to feed and take care of. I make about $8/hr and get less than 40 hrs/week. I am stuck because I need help from my family. Trying to save even $20 a month is difficult.

I'm taking a shot in the dark to see if I can make a proof-of-concept build, and trying to teach myself a little C++ before I put myself thousands more in debt going back to college.

Top down, I'm not expecting perfection. This is an indie game I will likely sell for maybe $5 a pop at most. I don't expect perfect characters or perfect art, especially in the early development stages. I'm trying to find the simplest solutions I can to resolve some of the issues. I'm not looking for AAA quality.

Don't you just re-target the armature for your character? I've watched people re-target the bones for much smaller models. The guy that made that plugin, for example, re-targeted to match a child, but as I understood it, he was claiming to be using the Unreal armature. Perhaps I just misunderstood him? Maybe I have this backwards? Maybe he had a custom armature that he re-targeted to the Unreal armature? It's been a while and I haven't tried this myself yet.

The general consensus I got from asking this very question a few months ago was that you should always create your own character base meshes from scratch. Even then, I did not know that there was any way to create facial animation other than morphs. So this would not have been the right question to ask as a starting question anyway. I need to understand what I am looking for in a base mesh first.

ManuelbastioniLAB is the option I'm looking at right now, but it looks like everything is done through morphs. It turns out this looks like my best overall solution. It looks like I can build the editor I need with his models; however, I'm reaching out to the author again to confirm whether I can legally modify the exported models. This is so that I can add mouth clusters, or at the very least a bone to control the jaw à la talking mailbox, and morph targets for the eyes.

I read one of the posts you made in 2015, and you mentioned that Unreal does not use the GPU to render morphs and it depends entirely on memory. Is this still true, or has this been fixed?

[Edit]

Something I think I missed earlier: are you saying to just develop my own character mesh, then use the G3 rigging? I was assuming you were talking about using the G3 base mesh as a whole. I thought the rigging/armature was considered part of the base mesh? I'm also not sure how one would go about applying just the framework to a fresh mesh. In that case, I wonder about generating a mesh from Manuel's plugin and then trying to apply the G3 framework to it? Again, there may be legal problems with doing this.

As a poor man’s solution, and I am trying to avoid this, I could treat dialogue as a text-based adventure. Instead of having animations, you just use dialogue boxes much like Morrowind did. The problem here is that you will receive orders over comms at times when you are in a vehicle in-game. It might feel odd to have audio files while you are in a vehicle and it becomes a text based adventure when you are talking to that same NPC in person.

Don't know why. In 2018 the tools and assets are there for a bunch of kids working out of their bedroom to produce an AAA result, and if they work smart, a result equal to a top-shelf game development company's. Unreal 4, for example, is an engine that once would have cost you a million dollars to license, and for the most part it is free until you start making some money. All that is required is the time and effort to learn from the top down. :wink:

Re-targeting is just the process of renaming an animation set from one naming convention to another, and adding this as a feature in UE4 is just a convenience for the have-nots. Using MB (expensive, I know) one does not really rename/re-target, but rather characterizes the animation data to a standard and saves the result. This way it makes no difference what rigging is being used, be it Mixamo, UE4, BVH, or any other source; it's a simple matter of drag, drop, and plot. Working smart, it's kind of like making a web page where one can mark up elements without knowing what the underlying HTML code is doing or even what any of it means.
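If re-targeting really is, at its simplest, re-keying animation data from one bone-naming convention to another, a minimal sketch might look like this (Python, with a made-up three-bone name map; real Mixamo/UE4 skeletons have far more bones, and real retargeting also handles proportions, not just names):

```python
# Hypothetical bone-name map; illustrative only.
MIXAMO_TO_UE4 = {
    "mixamorig:Hips": "pelvis",
    "mixamorig:Spine": "spine_01",
    "mixamorig:Head": "head",
}

def retarget_tracks(tracks, name_map):
    """Re-key animation tracks from one skeleton's bone names to another's.

    tracks: dict bone_name -> keyframe data (left opaque here). Bones with
    no mapping are dropped, mirroring what happens when a retarget has no
    matching target bone.
    """
    return {name_map[bone]: data
            for bone, data in tracks.items() if bone in name_map}
```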

What would be nice is if UE4 could save a re-target template, for example from Mixamo to UE4, so that the file could be shared. :wink:

I figure another 5 years or so re-targeting will go the way of the VHS tape. :wink:

Well, there is working from scratch and then there is working smart, and working smart means letting the software do the job it was designed to do. Once you have the perfect base shape, the perfect rig, the perfect UV mapping, and the perfect morph targets, there is no reason one cannot use that base to create any character actor you wish without having to reinvent the same wheel over and over again. Working smart usually means working faster, which leaves more time to perfect things one would otherwise have to surrender due to lack of time.

For your needs this would be a very good starting point, as you could accomplish the same top-down requirements that you could using Genesis, just from a different direction. Checking the licensing, the base assets are under GPL and the output is covered under CC0. Good to go, as in: what you make you can sell.

Just to confirm: yes, it's a hard decision to make (we spent two years deciding on Genesis), as once you make that decision you will probably have to live with it for a very long time.

Morph targets are now GPU rendered and no longer a performance hit.

To make use of the G3 framework you have to make use of the G3 base mesh. The licensing says, though, that if you make a derivative using the base mesh, then the derivative becomes your property to do with as you please, without any additional fees required.

As an example, if you export the G3 mesh to ZBrush, use the tools to make a morph target, change the base G3 shape by a large margin, and in the process create 100 unique shapes, then as derivatives all 100 belong to you as property. Think of G3 as a fancy box primitive that happens to have arms and legs. :wink:

The flip side is one only needs the G3 base rig and naming convention, which cannot be copyrighted any more than Ford can copyright the wheel; the licensing even says that Daz3d encourages this for making new products that you can give away for free or sell if you wish.

As a heads-up, though: the reason we went with Genesis is because Daz Studio is in the millions and millions of downloads, but it had not really been used as a games development tool until 4.15 and the introduction of G3, so its complexity has a high learning curve. To try to fill in the gaps I've sent a few e-mails off to Daz3d asking for permission to put together a UE4/G3 SDK so the framework can be tested in hand.

So it sounds like you are taking the G3 mesh and then re-shaping it in an editor to your needs, I take it? Interesting idea, actually.

Since morph targets are GPU rendered, and the tool I want to use already supports this, I may go this route actually. I really like the combinations of characters I can make with Manuel's plugin, and it seems I can really do anything with the model, according to him, so long as I don't reverse-engineer it or try to export everything to my game. Fair enough.

I want you to know that I really appreciate your advice, and I am taking it to heart and thinking about it. I know it takes time out of your day and your workflow to come in here and talk to me. I may send you a PM once in a while for advice, if you are okay with that.

Something that may help you understand where I am coming from is my current situation. Basically, I started out at 15 wanting to develop my own game. During my high school years I started modding different games and began looking into some game engines, but found out that most decent engines that would do what I needed them to do would cost millions, and I wouldn't have all the tools or skills I needed for the job. I looked into going to a college that had a course in game design, but found out that even if you did graduate, competition was very high; only the best would make it, and they'd end up making games that followed the same format over and over again because that was what made money.

Long story short, I'm a 34-year-old who makes $8/hr and has a three-year-old to feed and take care of. I can afford $10/night to feed my family, and I can afford to fill my gas tank only about half-way. Saving just $20/month is difficult, and most of that is used on upkeep of my current rig, fees for programming courses, and may also go to paying a monthly fee for Substance Painter. I put myself through college so that I could go into IT, but after years of working in the field I found that it just wasn't for me. I also had an accident a few years ago that put me in my present situation, and I've been struggling to find another field to go into since. Honestly, I've wanted to go into game development since I was 15, but I didn't have access to an affordable game engine. Since we have Unity and Unreal now, it's a possibility, so I thought I might give it another try.

For now, I'm a solo dev. I'm handling all 3D models, all programming/blueprints, music, sound, and animation. Honestly, I'm a bit of a jack of all trades but a master of none. Animation and programming are my weakest points, but I'm putting myself through a course on C++ programming for games. This is why I'm not expecting AAA quality; I'm more focused on good gameplay and fun game mechanics as opposed to superb animations/graphics/visuals. I want to make good visuals, but maybe not top-notch AAA quality. The important thing is that the visuals look consistent across the board. I'm also going with a sci-fi theme to give myself more free flow over the art.

To put this into further perspective, let's say I need to make a door. I could make a fantastic door. The best door ever made. I can make the lock turn, I can model all the screws, I can make the knob turn, I can give the door physics, make the hinges, I can detail out each component that was used to make a door in real life. I could spend weeks on just that door. OR, I can make a basic door and have it open/close. It is just a door, after all. Give it enough detail that it has the illusion of being real. As a player, I'm not really going to examine the door all that closely when I'm being chased by a monster. I can also just make it look like a nice enough door and, hey, at the end of the project when everything is put together, I can always add more layers to make it look even better.

My goal right now is to make a greybox representation of all the basic, primary game mechanics that I will need, and have them working fluidly. Basically, these are things that are core mechanics of the game I want to build and I want to make sure I can get over each of these hurdles. If I can put these mechanics in a greybox prototype that I can use as a visual presentation, I may be able to garner the support of my family, so that they feel comfortable backing me, and maybe I can inspire some help within some of the communities that I’m involved in. Then I can think about building a team. Who knows, if I make something solid, maybe I could even get help from an animator with access to Motion Capture. But I just don’t have access to it, so I need to do it the hard way until I know I can trust someone else to take care of that part for me. If things go well enough, and I manage to raise some funding, maybe I can afford better tools. But for now, I need to focus on what I know I CAN do. Most of all, I need to prove to MYSELF that I can handle it.

If I fail, then maybe I can take up software engineering, but at least I tried. :slight_smile:

Having NPCs that can talk to the player, and customizable characters, are two of the most important core mechanics of my game. As long as I can get a basic talking animation in-game (I don't need it to be perfect) and I can make it without something like MB, I'll at least know that I can make that mechanic work for now. Maybe I can get an animator later on to animate the lip files, or maybe I can get some funding for better tools if my concept is good enough. But I want to prove that I can make it work without MB first, just in case, so that I myself know that I can do it.

If I can take all the base mechanics, overcome the hurdles that are most important, build a good framework for these mechanics, and get a greybox proof of concept done, then it will give me the confidence I need to start building top down. :slight_smile:

Try and understand from my post here, that I am not trying to make excuses for myself. I’m done with all of that. But I do have budget limitations. So, I’m going to take a different approach and work with what I can do, and with the tools that I have. I can detail it later.

So, the next question, since MB is not affordable for me: is there a recommended software solution I can use to animate morph animations to sound files by hand?

Not an original idea, as re-shaping is the base that all character generators have been using for years. The first time I got interested in this direction was with the animated film “Final Fantasy: The Spirits Within”, where a custom app was used to generate all of the background actors off of a single base. The next time the idea showed up was in “The Lord of the Rings”, where Weta, who did a lot of the effects and crowd simulations, developed a crowd simulator called “Massive” which could generate an army of unique actors.

I wish people would stop saying they cannot produce so-called AAA results, as it's just a word that describes a result from a bottom-up perception.

Best explanation of top-down vs. bottom-up processing:

https://www.youtube.com/watch?time_continue=266&v=aJy5_p_LAhQ

So, to solve the problem from the top down: animation of anything is nothing more than the changing of a variable over time (and in some cases space), so any 3D application, like Blender, can be used to animate morph shapes to a sound file and create a usable result. The result can then be imported into UE4, a different environment from Blender, and will work as expected, just as it did in Blender. As a raw asset, the work can then be used to create a bottom-up result as to the perception of what you see. :wink:
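“The changing of a variable over time” is literally all a keyframe channel is. A minimal Python sketch (hypothetical function name) of sampling a morph-weight channel by linear interpolation between keyframes:

```python
def sample(keyframes, t):
    """Linearly interpolate a morph weight (or any scalar channel) at time t.

    keyframes: sorted list of (time, value) pairs. Outside the keyed range
    the first/last value is held, as most animation tools do by default.
    """
    if t <= keyframes[0][0]:
        return keyframes[0][1]
    if t >= keyframes[-1][0]:
        return keyframes[-1][1]
    for (t0, v0), (t1, v1) in zip(keyframes, keyframes[1:]):
        if t0 <= t <= t1:
            # Linear blend between the two surrounding keys.
            return v0 + (v1 - v0) * (t - t0) / (t1 - t0)
```

Keying such a channel against a sound file's timeline, then exporting, is all that "animating morphs to audio in Blender" amounts to at the data level.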

After that, things get really interesting: how you can do the same thing but easier and faster, as you now “know”. :smiley:

I just want to drop in and say that this is a very interesting post that I wish I had come across sooner, because the discussion here would have helped me get over a few roadblocks in figuring out the ideal workflow.
My targets were pretty similar to the OP but this is the path I’ve chosen after a few weeks of R&D (aka trial and error):

– Use MakeHuman to create the base model. I considered ManuelBastioni and Adobe Fuse but neither had the ability to create children or elder models which was a requirement for my game. MH’s default skeleton has facial bones and interfaces well with a lot of other rigs.
– Import to Blender and spruce up the topology, adding detail if necessary, then export as FBX to Maya.
– Maya's HumanIK tool has been a lifesaver, as I can retarget animations from Mixamo's catalogue as well as CMU's mocap repository (The Motionbuilder-friendly BVH Conversion Release of CMU's Motion Capture Database - cgspeed) to my MakeHuman skeleton.
– For lip-sync, the MakeHuman eXchange (MHX) add-on for Blender can drive “visemes”, which lets you shape mouth movements according to the phonemes in your audio. You can do this more directly by importing a Moho file (.dat) from a free tool like Papagayo, which lets you create your own lip-sync data.
– I combine these lip-sync animations with body-talking animations by blending them directly within Unreal.
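On the Moho/Papagayo point above: the switch file Papagayo exports is a small text format, and (assuming the common layout of a `MohoSwitch1` header followed by `frame phoneme` lines — check your exporter's actual output) a minimal Python parser might look like this:

```python
def parse_moho(text):
    """Parse a Papagayo/Moho switch (.dat) export into (frame, phoneme) pairs.

    Assumes a 'MohoSwitch1' header line followed by 'frame phoneme' lines;
    blank lines are ignored. Hedged: verify against your tool's output.
    """
    lines = [ln.strip() for ln in text.splitlines() if ln.strip()]
    if lines and lines[0].startswith("MohoSwitch"):
        lines = lines[1:]  # drop the format header
    cues = []
    for ln in lines:
        frame, phoneme = ln.split()
        cues.append((int(frame), phoneme))
    return cues
```

The resulting (frame, phoneme) cues are what you'd map onto viseme shape keys in Blender.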

Hope some of this is helpful to anyone reading this.

I would love to be able to add facial animations to the dialog for my characters in my game. I only have one model with a face rig on it at the moment.