Experimenting with Daz characters and the ML Deformer

I’ve been experimenting with a workflow for getting Daz characters into UE5.1 and using the ML Deformer, without having to deal with all Daz’s corrective morphs and pose drivers.

So far the results have been pretty good.

I know it’s all still experimental and so there’s very little documentation around, but I would love to hear from an expert (perhaps a dev) about the use cases for the different training models and deformer graphs.

Is one model/graph better suited to some use cases than another?

For example, I’ve noticed that finer details, such as those around the eyes when looking left/right, don’t seem to get captured in the training process. Is there any way to capture those finer details?

Basically, does anyone have any deeper insights into the ML Deformer, models and graphs beyond what’s mentioned in the docs?

Thanks


Hi 🙂 Sorry for the late reply, I just noticed the post.

Basically you generally want to use the Neural Morph Model for now. It is the most stable and best-tested model, and it has the highest performance.

Let me give a little overview of the models and their differences:

  • Neural Morph Model:
    Introduced in UE 5.1, this is the high-performance, low-memory-footprint model, which works best for most deformations. It will generate a set of external morph targets. External means they aren’t visible as regular morph targets to the rest of the engine, but they are still compressed using the same compression system and are still applied as regular morph targets. The neural network runs on the CPU; it is a small network and runs very fast. At runtime it drives the weights of the morph targets that it generated (see the conceptual sketch after this list). This model can do everything that the Vertex Delta Model did, plus more.

  • Vertex Delta Model:
    Introduced in 5.0, but being replaced by the Neural Morph Model. This used a GPU-based neural network, and a much larger one. It is much slower than the Neural Morph Model and uses up to 10x more memory, so it is best not to use it. The main reason we kept it around is as an example of how to integrate a model with the deformer graph system and how to use GPU-based neural networks. The Neural Morph Model can do the same thing as the Vertex Delta Model, but much faster.

  • Nearest Neighbor Model:
    This model specializes in ML cloth. The Neural Morph Model can also do cloth, but is more likely to lose some crisp details in some folds. The Nearest Neighbor model can preserve those better, at the cost of a bit of extra memory and performance. It is still very experimental and documentation is likely missing. It uses the same morphing technology as the Neural Morph Model, so it is pretty performant as well, but slower than the Neural Morph Model.
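To make the Neural Morph Model description a bit more concrete, here is a rough NumPy sketch of the runtime data flow described above: a small CPU-side network maps the pose to one weight per generated morph target, and the weighted deltas are added on top of the skinned mesh. All sizes, weights, and names here are made-up placeholders, not engine code.

```python
# Rough NumPy sketch of the data flow described above, NOT engine code.
# A tiny CPU-side MLP maps the pose (bone rotations) to one weight per
# generated morph target; the weighted sum of morph-target deltas is then
# added on top of the regular linear-blend-skinned vertex positions.
import numpy as np

num_bones = 60        # placeholder skeleton size
num_morphs = 128      # morph targets generated during training
num_vertices = 5000   # placeholder mesh resolution

rng = np.random.default_rng(0)

# Stand-ins for trained data: per-morph vertex deltas and the MLP weights.
morph_deltas = rng.standard_normal((num_morphs, num_vertices, 3)) * 0.001
w1 = rng.standard_normal((num_bones * 3, 128)) * 0.01   # one hidden layer, 128 units
b1 = np.zeros(128)
w2 = rng.standard_normal((128, num_morphs)) * 0.01
b2 = np.zeros(num_morphs)

def morph_weights(bone_rotations):
    """Tiny MLP: flattened bone rotations in -> one weight per morph target out."""
    x = bone_rotations.reshape(-1)
    h = np.maximum(w1.T @ x + b1, 0.0)   # ReLU hidden layer
    return w2.T @ h + b2

def deform(skinned_positions, bone_rotations):
    """Apply the learned morph corrections on top of linear blend skinning."""
    weights = morph_weights(bone_rotations)                   # (num_morphs,)
    correction = np.tensordot(weights, morph_deltas, axes=1)  # (num_vertices, 3)
    return skinned_positions + correction

pose = rng.standard_normal((num_bones, 3))        # arbitrary pose input
skinned = rng.standard_normal((num_vertices, 3))  # skinned mesh for that pose
print(deform(skinned, pose).shape)                # (5000, 3)
```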

So basically my advice is, for now, always use the Neural Morph Model.

The NMM (Neural Morph Model) has two modes: Local and Global.
Both have some advantages and disadvantages:

  • NMM Global Mode:
    This requires random poses as inputs. It will use these random poses to learn correlations between deformations. I would say the advised number of poses is between 10k and 20k, but when iterating and trying things out you can use 5k or fewer poses too (a sketch of generating such random poses follows this list).
    The random poses are needed to also cover the space of all possible poses and rotations that joints can be in. Performance is mostly impacted by the number of hidden layers and the number of units in those hidden layers. Generally 128 to 256 units per layer should be OK, with a maximum of 3 layers. Try 1 or 2 layers first to see whether they already work well.

  • NMM Local Mode:
    This works with random poses, but also with structured ROMs. So here you could actually input a range-of-motion animation that moves the individual joints; often this kind of animation is used to test skinning. You have to watch out for some things though: for example, you shouldn’t animate both the left and right arm at the same time, as then it will learn to correlate those deformations and will deform the right shoulder while rotating the left arm. We will soon ship an example animation that you can use.
    You can also mix random data with structured data, and 5.2 will have some further improvements in this mode. You can already get good results with a few hundred poses in some cases, but typically you want a few thousand poses. More is always better.
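For the Global Mode random-pose requirement above, here is a small illustrative Python sketch (not part of the engine or the Daz bridge) of what “random poses inside the joint limits” means in practice. The joint names and limit values are made up; in practice you would drive your actual rig with values like these in Daz/Maya/Blender and export the result as the training animation.

```python
# Illustrative only: generating random training poses inside per-joint
# rotation limits, as you would for NMM Global Mode.
import random

# Hypothetical per-joint limits in degrees: (min, max) for each rotation axis.
JOINT_LIMITS = {
    "l_shoulder": {"up_down": (-85, 40), "front_back": (-110, 40), "twist": (-80, 80)},
    "l_elbow":    {"bend": (0, 140), "twist": (-70, 70)},
    "l_knee":     {"bend": (0, 150)},
}

def random_pose(rng=random):
    """One random pose: every joint gets a value sampled inside its limits."""
    return {
        joint: {axis: rng.uniform(lo, hi) for axis, (lo, hi) in axes.items()}
        for joint, axes in JOINT_LIMITS.items()
    }

# 10k-20k poses is the advised range above; use ~5k or fewer while iterating.
training_poses = [random_pose() for _ in range(10000)]
print(training_poses[0])
```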

General advice / tips and tricks:
When you see a bad deformation, it is most likely because your training data doesn’t contain enough sample poses where the bones are in that specific orientation. Try augmenting your data around that specific pose.

Try keeping the morph targets per bone for NMM Local Mode between 3 and 12 or so. You typically want to stay below 500 morph targets in total, preferably 250 max.
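For example, if your rig has around 50 deforming bones, 5 morph targets per bone already gives 50 × 5 = 250 morph targets, so higher per-bone counts quickly push past that preferred ceiling.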

Start by setting the compression to the lowest value (I think in 5.1 it’s called Error Tolerance) and the Delta Threshold to 0. Then make things look as good as possible using those settings. After that, try increasing both values, especially the Delta Threshold (or in 5.2, Delta Zero Threshold). That will greatly reduce memory usage, but at some point it starts creating bad artifacts on the mesh.
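As a rough illustration of that trade-off (made-up numbers, not engine code): dropping every delta below the threshold is what saves memory, and it is also what can erase small corrections like the ones around the eyes.

```python
# Toy demonstration of delta-threshold filtering: higher thresholds keep
# fewer deltas (better compression) but discard the smallest corrections.
import numpy as np

rng = np.random.default_rng(1)
# Fake morph-target delta lengths in cm; most corrections are tiny.
delta_lengths = np.abs(rng.normal(0.0, 0.05, size=200000))

for threshold in (0.0, 0.01, 0.05, 0.1):
    kept = np.count_nonzero(delta_lengths > threshold)
    print(f"threshold {threshold:>4}: keep {kept / delta_lengths.size:5.1%} of the deltas")
```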

In the NMM Local Mode, the “number of hidden units” setting should be about the same as your “morph targets per bone” setting. Keep the number of hidden layers to 1 or 2.

When iterating and trying things out, set the number of iterations to 1000 or 2000 and limit the maximum number of samples to, say, 5000. This allows you to train much quicker. Once you start to get nice results, try with 10k samples and more input sample frames.

If you lose fine details, it is possible that the Delta Threshold value in the training options is filtering out very small changes. Try lowering it; that’s why you should start with a value of 0. A low value can increase memory usage quite a bit though, so experiment to see what value works (the higher the value, the better the compression).

For your final game version, train with as many poses as possible and an iteration count above 10000. There are diminishing returns though.
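To pull the training advice above together, here is a paraphrased summary as a pair of Python dicts. The key names are descriptive only, not the literal UI labels (those differ between 5.1 and 5.2).

```python
# Paraphrased from the advice above; keys are descriptive, not literal UI labels.
iteration_settings = {
    "training_iterations": 2000,     # 1000-2000 while experimenting
    "max_training_samples": 5000,    # cap the samples for fast turnaround
    "delta_threshold": 0.0,          # keep all deltas while judging quality
}

final_settings = {
    "training_iterations": 10000,    # or more, with diminishing returns
    "max_training_samples": None,    # use as many poses as you have
    "delta_threshold": "raise until artifacts appear, then back off",
}
```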

Hope this helped a bit!


Thank you so much for replying, and I apologise for not responding sooner.

For some reason I didn’t get any notification that anyone had replied to my post.

This information is very helpful and will no doubt help others who are also experimenting with this exciting new tech.

@mrpdean if you’re still testing with the MLDeformer and Daz characters, I’ve added some support to the bridge. Daz to Unreal – MLDeformer Support – David Vodhanel
There’s a good chance you have a custom workflow already though.

@JohnVanDerBurg thanks for that info. It sounds like I should be generating a specific set of animations instead of random animations for this case. The Daz characters come with movement ranges for the joints, so I use those to simplify the interface.
Given something like a shoulder, I think I would need to generate combinations of up-down, forward-back, and twist. Should I be trying to hit a lot of different angles for each of those axes, or would just a few work?

Hi, @David_Vodhanel
I tried your program and it was very fast and effective.
The character on the left in the picture is trained.
The right one is driven by JCMs.
Knees and elbows are doing fine now.
I only used 1000 poses.
If possible, I hope you can pay more attention to this pose; the lower-body bulges may disappear completely.

Thank you!

@mrpdean
I think the JCMs starting with “face_ctrl_” should be preserved.
They can be generated by the program itself and their effect is stable.
Good luck!

@David_Vodhanel
If it could train iteratively, the program could offer an option to generate more poses specifically for the keys that are not well trained.
That would save a lot of time.


If you use Local Mode in the Neural Morph Model, you can use animations that move individual joints. Just be sure not to animate, say, the left arm and right arm at the same time, because then it will learn a correlation between them, and that can cause the left arm to also deform when rotating the right one.

You basically want to provide the training algorithm with many configurations that the joint can be in.
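If it helps to see what “many configurations per joint” can look like for a structured ROM, here is a small sketch. The axis names and angle ranges are hypothetical stand-ins for the Daz joint limits, not anything the engine requires.

```python
# Sketch only: building a structured ROM for Local Mode by sampling each
# joint's axes on a grid and keying one joint (or one side) at a time, so the
# model doesn't learn left/right correlations.
from itertools import product

SHOULDER_AXES = {
    "up_down":    [-85, -40, 0, 40],
    "front_back": [-110, -60, 0, 40],
    "twist":      [-80, -40, 0, 40, 80],
}

def joint_rom(axes):
    """All grid combinations for one joint; 4 * 4 * 5 = 80 poses here."""
    names = list(axes)
    for values in product(*(axes[n] for n in names)):
        yield dict(zip(names, values))

left_shoulder_frames = list(joint_rom(SHOULDER_AXES))
print(len(left_shoulder_frames), left_shoulder_frames[0])

# Key the right shoulder in a separate block of frames rather than mirroring
# it onto the same frames as the left one.
```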

We hope to improve this as well, so it will also work fine if you move both arms, for example. That will allow it to be trained better on captured 4D data etc. We’re trying to reduce the amount of training data required.

The more angles you have, the better the results, but at some point you get diminishing returns, so adding more won’t help much anymore. It is hard to say exactly how many you need; this will require some experimentation. The trained system can interpolate, but the interpolation will not be as good if too few angles are covered.

This is one of the things we will be looking into in the future for sure 🙂

We will definitely try to improve the workflows and iteration speed.
Our first major goal is to accept training data that is easier to create, for example existing ROMs, and to reduce the number of training frames needed.

Hey, I am trying to understand what the use cases for this plugin are. Is the end goal to attach a component to our characters so that we would have better skin textures on the meshes?
Sorry, I am quite new to all this and just trying to understand it as much as I can 🙂
Thanks!

The end goal is to have better, more realistic skin deformation as the character’s joints rotate through different poses.

Nothing to do with textures really.

Hey @mrpdean, thanks for all of that!
Is it possible to make a tutorial, ideally with a Character Creator 4 character? Kevin, for example. Thanks a lot!