I am aware this is a fairly ambiguous question, and I'm not looking for a one-trick solution. I'm after a set of steps or processes that I would need to learn, and an understanding of the best ways to create this.
What I am attempting to create is a music video: I have a routine that I want to motion capture and use to animate a 3D character, as well as capturing the face to sync to the vocal.
Any suggestions on how or where to start with this? Stylistically I don't think it needs to be perfect, as I want it to be apparent that it is a 3D world.
I've attached some photos of the vibe I'm after when it comes to character creation. I would be basing the face off of a real person.
Hey there @SPYDRFAE! Welcome to the community! A lot depends on how deep you want to dig into this, as modeling, UVs, rigging, motion capture, cleanup, and application are all incredibly in-depth areas, each with its own specialists. I've got a couple of questions, and then I can try to compile a path forward for you.
Are you intending to have 100% control, and are you already somewhat capable of working in a 3D art environment like Blender/Maya/3ds Max (or willing to learn from scratch)? Or would you prefer a more out-of-the-box solution with less individual control?
Are you going to have proper full motion capture data, or would you be using more common equipment like an iPhone for face capture?
I ask because the learning paths differ enormously, both in the time you'll need to put in and in how good you'll be able to get it looking.
Set up and take pictures of the model:
A. In various key poses: T-pose, A-pose, arms up near the ears, etc.
B. One set for Meshroom to get a texture out of. This is done by taking shots at top, center, and bottom heights while moving 360° around the model in steps of roughly 15°. Lighting is very important here: you need flood lighting from above and from the sides to get even light and no shadows.
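If you want to sanity-check the shot count before the shoot, here's a throwaway Python sketch of the capture plan above. The numbers are just the ones I mentioned (three heights, 15° steps); tweak them for your setup.

```python
# Rough capture plan for the photogrammetry set: one ring of photos every
# 15 degrees, repeated at three camera heights (top / center / bottom).
STEP_DEG = 15
HEIGHTS = ["top", "center", "bottom"]

angles = list(range(0, 360, STEP_DEG))   # 24 stops per ring
for height in HEIGHTS:
    for angle in angles:
        print(f"{height:>6} ring, {angle:3d} deg")

print(f"total shots: {len(angles) * len(HEIGHTS)}")  # 72 photos minimum
```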
Load the photo set from B above into Meshroom and wait a day for it to process. Be aware that the end result is going to be trash, by the way.
Take the mesh generated by Meshroom into Blender and create a new, clean model on top of it.
A. Make sure to use a decent vertex density.
B. Make sure to cut seams and unwrap the UVs as one piece for the whole body; the head is separate.
C. Export the model.
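For step C, a minimal Blender Python sketch of the export (the path is a placeholder; on Blender versions before the new OBJ exporter, the operator is `bpy.ops.export_scene.obj` instead):

```python
# Minimal export sketch (Blender 3.x+ OBJ exporter); run from Blender's
# Python console with the retopologized mesh selected. Path is a placeholder.
import bpy

bpy.ops.wm.obj_export(
    filepath="//exports/retopo_body.obj",   # '//' = relative to the .blend file
    export_selected_objects=True,
)
```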
Import the exported model into Meshroom (you have to hack into the project files to replace the model Meshroom generated).
A. Run the texturing again after removing the mesh-generation node. This will texture your UVs with the photos, getting you a good texture output that isn't trash.
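Here's a hedged Python sketch of the "hack into the project files" part: it just overwrites the mesh that the texturing step reads with your retopologized export. The cache layout (`MeshroomCache/Mesh*/<hash>/mesh.obj`) and the paths are assumptions; check your own project folder to see which file the Texturing node actually takes as input.

```python
# Swap the Meshroom-generated mesh for the Blender retopo before re-running
# the Texturing node. Folder layout and file names are assumptions -- verify
# them against your own MeshroomCache before running this.
import shutil
from pathlib import Path

project_cache = Path("MyScan/MeshroomCache")     # hypothetical project location
retopo_mesh   = Path("exports/retopo_body.obj")  # mesh exported from Blender

for original in project_cache.glob("Mesh*/*/mesh.obj"):
    shutil.copy2(original, original.with_name(original.name + ".bak"))  # keep a backup
    shutil.copy2(retopo_mesh, original)
    print(f"replaced {original}")
```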
Get a ZBrush license, load up the model and texture, and learn to use ZBrush.
A. Add detail. Add polypaint. Bake down the end result.
B. Add more sculpting detail and bake it down to a normal map.
C. Export the lowest-detail model and the baked maps.
Import into Unreal.
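If you'd rather script the import than drag-and-drop, here's a hedged Unreal Editor Python sketch (file paths and the destination folder are placeholders; run it from the editor's Python console with the Python Editor Script Plugin enabled):

```python
# Batch-import the baked maps and the low-poly FBX into the project.
# All paths below are placeholders.
import unreal

def import_files(files, destination="/Game/Character"):
    tasks = []
    for f in files:
        task = unreal.AssetImportTask()
        task.filename = f
        task.destination_path = destination
        task.automated = True   # suppress import dialogs
        task.save = True
        tasks.append(task)
    unreal.AssetToolsHelpers.get_asset_tools().import_asset_tasks(tasks)

import_files([
    "C:/exports/char_lowpoly.fbx",
    "C:/exports/char_basecolor.png",
    "C:/exports/char_normal.png",
])
```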
Create materials:
A. A proper setup for skin (follow the old standards/tutorials from Epic's Digital Humans examples).
B. Eyes, with the proper setup also covered in the Digital Humans material.
Import into Blender (or whatever you know) and generate hair cards.
A. Lay out the UVs for all the cards in one UV map.
B. Create a flowmap for the cards and bake it out to a texture for use in the engine.
C. You'll probably want to design and bake down hair chunks to use for texturing; the Blender hair system works great for that.
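For step C, a minimal Blender Python sketch of setting up a hair particle system to groom the chunks you'll bake down (object name and counts are placeholders; the grooming itself is still hand work in Particle Edit mode):

```python
# Add a hair particle system to a placeholder scalp/emitter object so you can
# groom hair chunks and later bake them down for the cards.
import bpy

scalp = bpy.data.objects["ScalpEmitter"]         # placeholder object name
scalp.modifiers.new(name="HairGroom", type='PARTICLE_SYSTEM')

settings = scalp.particle_systems[-1].settings
settings.type = 'HAIR'
settings.count = 500                   # number of guide hairs, tweak to taste
settings.hair_length = 0.25            # in meters
settings.child_type = 'INTERPOLATED'   # thicken the groom with child hairs
```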
Import the hair into the engine and add it to the character.
Go back to Blender and add weight painting and an armature to what has so far been a static mesh.
This is a multi-step process which will take days and require months of learning. Let's break it down.
A. Generate a skeleton or reuse Unreal's skeleton. Edit it to fit the model.
B. Parent the mesh to it with automatic weights.
C. Start moving parts around and, with mirror mode on, paint at 0.25 strength to add or subtract weight where things don't look right when moving (knees, hip/groin area, armpits, elbows, wrists, neck).
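Steps A–B in script form, if that's clearer: a minimal Blender Python sketch of parenting the mesh to the armature with automatic weights (object names are placeholders; the manual painting in step C still has to happen by hand).

```python
# Parent the character mesh to the armature with automatic weights
# (same as Ctrl+P -> "With Automatic Weights" in the viewport).
import bpy

mesh = bpy.data.objects["CharacterBody"]   # placeholder names
rig  = bpy.data.objects["CharacterRig"]

bpy.ops.object.select_all(action='DESELECT')
mesh.select_set(True)
rig.select_set(True)
bpy.context.view_layer.objects.active = rig   # armature must be the active object
bpy.ops.object.parent_set(type='ARMATURE_AUTO')
```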
Import into the engine and test by retargeting a mannequin animation to see the results.
Back to Blender. This time you will need to add the Rigify addon and pop in a human skeleton. Sever the face portion out, scale it to fit the model, and merge it into a copy of the skeleton you are using.
A. Animate the 52 blend shapes needed for ARKit, one per frame.
B. Bake down the keyframes into morph targets with the proper naming.
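Here's a hedged Blender Python sketch of step B, assuming you keyframed one ARKit pose per frame as in step A: it steps through the frames and bakes each Armature deformation into a shape key with the ARKit name, so they come through the FBX export as morph targets. The mesh name is a placeholder and the name list is truncated to keep it short.

```python
# Bake one shape key per animation frame, named after the ARKit blendshapes.
# Requires Blender 2.90+ for modifier_apply_as_shapekey, and assumes the
# deforming modifier is named "Armature". Fill in all 52 names in frame order.
import bpy

ARKIT_NAMES = [
    "eyeBlinkLeft", "eyeBlinkRight", "jawOpen",
    "mouthSmileLeft", "mouthSmileRight", "browInnerUp",
    # ... the rest of the 52 ARKit names, matching your keyframe order
]

face = bpy.data.objects["CharacterFace"]        # placeholder mesh name
bpy.context.view_layer.objects.active = face

if face.data.shape_keys is None:
    face.shape_key_add(name="Basis")            # make sure a basis key exists

for frame, name in enumerate(ARKIT_NAMES, start=1):
    bpy.context.scene.frame_set(frame)
    # Capture the current Armature deformation as a new shape key, then rename it.
    bpy.ops.object.modifier_apply_as_shapekey(keep_modifier=True, modifier="Armature")
    face.data.shape_keys.key_blocks[-1].name = name
```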
Import into the engine and test by connecting Live Link Face to the animation Blueprint.
Mocap.
Any of them will stream to the engine. Set up streaming on your mocap solution, and set up receiving the stream via Live Link in the engine.
Test a performance.
Looking good? You're done.
Looking bad? Go back to whatever step needs fixing (most likely the weight painting).
Clothing.
I do them in Blender. You can use dedicated solutions (Marvelous Designer), but I'm not sure why you would pay extra at this point.
A. Learn how clothes are made IRL: patterns, sewing, etc.
B. Replicate a pattern that fits the character in Blender. Make sure to do the UVs first.
C. Stitch and simulate the cloth in Blender with the proper addon (can't remember the name; it's free and it's pretty great).
D. Once the simulation is done, merge all the assets into one mesh and texture it.
E. Parent it to the armature and transfer the weight paint onto the cloth (see the sketch after this list).
F. Play with it in Pose Mode and adjust the weight paint to avoid issues.
G. Cut away the character mesh from under the cloth. Export and import into the engine to test.
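For steps E–F, a hedged Blender Python sketch of the weight transfer using a Data Transfer modifier (object names are placeholders; you may still need to generate or clean up vertex groups, and the manual adjustment from step F stays hand work):

```python
# Copy the body's vertex-group weights onto the clothing mesh, then deform the
# cloth with the same armature. Object names are placeholders.
import bpy

body  = bpy.data.objects["CharacterBody"]
cloth = bpy.data.objects["Jacket"]
rig   = bpy.data.objects["CharacterRig"]

xfer = cloth.modifiers.new(name="WeightTransfer", type='DATA_TRANSFER')
xfer.object = body
xfer.use_vert_data = True
xfer.data_types_verts = {'VGROUP_WEIGHTS'}
xfer.vert_mapping = 'POLYINTERP_NEAREST'
xfer.layers_vgroup_select_src = 'ALL'

bpy.context.view_layer.objects.active = cloth
# Create matching vertex groups on the cloth (the "Generate Data Layers"
# button on the modifier), then bake the transferred weights in.
bpy.ops.object.datalayout_transfer(modifier=xfer.name)
bpy.ops.object.modifier_apply(modifier=xfer.name)

arm = cloth.modifiers.new(name="Armature", type='ARMATURE')
arm.object = rig
```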
All in all, it will take a pro about a month and a beginner about a year.
No one will pay you as much as they should for doing this professionally. Artists for AAA game characters make a fraction of what they should while having to learn and maintain a skill set greater than what went into Michelangelo's painting of the Sistine Chapel, without any recognition whatsoever.
And then along comes MetaHuman Creator, further obscuring the fact that in order to create characters you actually have to possess all of this knowledge…