Experimenting with Daz characters and the ML Deformer

Hi :slight_smile: Sorry for the late reply, I just noticed the post.

Basically, you want to use the Neural Morph Model for now. It is the most stable and well-tested of the models, and it has the highest performance.

Let me give a quick overview of the models and their differences:

  • Neural Morph Model:
    Introduced in UE 5.1, this is the high-performance, low-memory-footprint model, which works best for most deformations. It generates a set of external morph targets. External means they aren’t visible as regular morph targets to the rest of the engine, but they are still compressed using the same compression system and are still applied like regular morph targets. The neural network runs on the CPU. It is a small network and runs very fast. At runtime, it drives the weights of the morph targets it generated during training. This model can do everything the Vertex Delta Model could, plus more.
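To make the idea concrete, here is a tiny conceptual sketch (plain NumPy, not engine code — all sizes and names are hypothetical) of how a morph-based ML deformer works: a small network maps bone inputs to one weight per generated morph target, and the final vertex deltas are the weighted sum of those morph targets.

```python
# Conceptual sketch of a morph-based ML deformer. Hypothetical sizes; the
# real model lives inside Unreal's ML Deformer framework.
import numpy as np

rng = np.random.default_rng(0)

num_vertices = 1000        # vertices in the deformed mesh
num_morphs = 64            # morph targets generated during training
num_bone_inputs = 30       # e.g. rotations of the driving joints

# Morph targets: one (num_vertices, 3) set of deltas per morph, baked at
# training time. Random values here, stand-ins for learned deformations.
morph_deltas = rng.normal(size=(num_morphs, num_vertices, 3)).astype(np.float32)

def tiny_network(bone_inputs: np.ndarray) -> np.ndarray:
    """Stand-in for the small CPU network: maps bone inputs to morph weights."""
    w = rng.normal(size=(num_bone_inputs, num_morphs)).astype(np.float32)
    return np.tanh(bone_inputs @ w)

bone_inputs = rng.normal(size=(num_bone_inputs,)).astype(np.float32)
weights = tiny_network(bone_inputs)                   # one weight per morph
deltas = np.tensordot(weights, morph_deltas, axes=1)  # (num_vertices, 3)
print(deltas.shape)  # (1000, 3)
```

Note that the network output is tiny (one float per morph target), which is why this approach stays so cheap at runtime.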

  • Vertex Delta Model:
    Introduced in 5.0, but being replaced by the Neural Morph Model. This model used a GPU-based neural network, and a much larger one. It is much slower than the Neural Morph Model and uses up to 10x more memory. It is best not to use it. We mainly kept it around as an example of how to integrate a model with the deformer graph system and how to use GPU-based neural networks. The Neural Morph Model can do the same things as the Vertex Delta Model, but much faster.
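A back-of-the-envelope way to see where the memory gap comes from (illustrative numbers only, not actual engine figures): a per-vertex-delta network must emit 3 floats for every vertex in its final layer, while a morph-weight network only emits one weight per morph target.

```python
# Illustrative parameter-count comparison between the two output styles.
# All numbers are made up for the sake of the arithmetic.
num_vertices = 30_000
num_morphs = 256
hidden = 256  # units in the last hidden layer

# Vertex Delta Model style: final layer emits 3 floats per vertex.
vertex_delta_out_params = hidden * (num_vertices * 3)

# Neural Morph Model style: final layer emits one weight per morph target.
morph_weight_out_params = hidden * num_morphs

print(vertex_delta_out_params // morph_weight_out_params)  # → 351
```

So with these (hypothetical) numbers the per-vertex output layer alone is hundreds of times larger, which is consistent with the big memory and speed difference described above.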

  • Nearest Neighbor Model:
    This model is specialized for ML cloth. The Neural Morph Model can also do cloth, but is more likely to lose some crisp details in certain folds. The Nearest Neighbor Model preserves those better, at the cost of a bit of extra memory and performance. It is still very experimental and documentation is likely missing. It uses the same morphing technology as the Neural Morph Model, so it is quite performant as well, though slower than the Neural Morph Model.
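For intuition only, here is a minimal sketch of the nearest-neighbor idea that helps preserve crisp detail: keep a bank of training poses with their exact deltas, and at runtime pick the stored deltas of the closest pose. This is a conceptual illustration under assumed data shapes, not the actual Nearest Neighbor Model implementation.

```python
# Conceptual nearest-neighbor lookup: exact stored deltas from the closest
# training pose preserve sharp folds that a smooth network would blur.
import numpy as np

rng = np.random.default_rng(1)
num_train_poses, pose_dim, num_vertices = 500, 20, 200

train_poses = rng.normal(size=(num_train_poses, pose_dim))
train_deltas = rng.normal(size=(num_train_poses, num_vertices, 3))

def nearest_neighbor_deltas(pose: np.ndarray) -> np.ndarray:
    """Return the stored vertex deltas of the closest training pose."""
    dists = np.linalg.norm(train_poses - pose, axis=1)
    return train_deltas[np.argmin(dists)]

# Query with a slightly perturbed copy of training pose 42.
query = train_poses[42] + 0.01 * rng.normal(size=pose_dim)
deltas = nearest_neighbor_deltas(query)
print(deltas.shape)  # (200, 3)
```

The trade-off mentioned above falls out naturally: storing exact deltas per training pose costs extra memory, and the lookup adds a little runtime work.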

So basically my advice is: for now, always use the Neural Morph Model.

The NMM (Neural Morph Model) has two modes: Local and Global.
Both have some advantages and disadvantages:

  • NMM Global Mode:
    This mode requires random poses as inputs, which it uses to learn correlations between deformations. I would say the advised number of poses is between 10k and 20k, but when iterating and trying things out you can use 5k poses or fewer.
    The random poses are needed to cover the space of all possible poses and rotations that joints can be in. Performance is mostly impacted by the number of hidden layers and the number of units in those layers. Generally 128 to 256 units per layer should be fine, with a maximum of 3 layers. First try whether 1 or 2 layers works well enough.
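A sketch of what "random poses covering the pose space" means in practice: sample each joint rotation independently within per-joint limits. The joint names and limits below are made up for illustration; how you actually bake such poses into a training animation depends on your DCC/engine workflow.

```python
# Hypothetical random-pose generator for Global mode training data.
# Joint names and rotation limits are illustrative, not real rig values.
import random

JOINT_LIMITS_DEG = {            # (min, max) swing per joint, illustrative
    "upperarm_l": (-90.0, 130.0),
    "lowerarm_l": (0.0, 145.0),
    "hand_l": (-60.0, 60.0),
}

def random_pose(rng: random.Random) -> dict:
    """One random pose: an angle per joint, drawn within that joint's limits."""
    return {j: rng.uniform(lo, hi) for j, (lo, hi) in JOINT_LIMITS_DEG.items()}

rng = random.Random(0)
poses = [random_pose(rng) for _ in range(10_000)]  # 10k-20k advised
print(len(poses))  # 10000
```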

  • NMM Local Mode:
    This mode works with random poses, but also with structured ROMs, so here you can actually input a range-of-motion animation that moves the individual joints; such animations are often used to test skinning. You have to watch out for some things though: for example, you shouldn’t animate both the left and the right arm at the same time, as the model will then learn to correlate those deformations and will deform the right shoulder while you rotate the left arm. We will soon ship an example animation that you can use.
    You can also mix random data with structured data, and 5.2 will bring some further improvements to this mode. In some cases you can already get good results with a few hundred poses, but typically you want a few thousand. The more, the better.
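The "don't animate left and right at the same time" rule can be sketched as a ROM generator that sweeps exactly one joint per frame while everything else stays at rest (joint names and the 90-degree sweep are illustrative assumptions):

```python
# Sketch of a structured ROM for Local mode: one joint sweeps at a time, so
# the model never sees left and right move together and cannot learn a
# spurious correlation between them.
def rom_frames(joints, steps=10):
    """Yield poses where exactly one joint sweeps its range; others rest at 0."""
    for active in joints:
        for i in range(steps):
            t = i / (steps - 1)                    # 0.0 .. 1.0 sweep
            yield {j: (t * 90.0 if j == active else 0.0) for j in joints}

frames = list(rom_frames(["upperarm_l", "upperarm_r"], steps=5))
# In every frame, at most one joint is non-zero: left/right never co-move.
print(len(frames))  # 10
```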

General advice / tips and tricks:
When you see a bad deformation, it is most likely because your training data doesn’t contain enough sample poses with the bones in that specific orientation. Try augmenting your data around that pose.
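One simple way to augment around a problem pose is to clone it many times with small random jitter on each joint angle, so the training set densely samples that neighborhood. The pose format (joint name to angle in degrees) and the jitter amount are assumptions for illustration:

```python
# Hypothetical augmentation helper: jitter a problem pose to densify the
# training data around the orientation that deforms badly.
import random

def augment_around(pose: dict, count: int, jitter_deg: float, seed: int = 0):
    """Return `count` copies of `pose` with uniform noise on every angle."""
    rng = random.Random(seed)
    return [
        {j: a + rng.uniform(-jitter_deg, jitter_deg) for j, a in pose.items()}
        for _ in range(count)
    ]

bad_pose = {"upperarm_l": 120.0, "lowerarm_l": 30.0}
extra = augment_around(bad_pose, count=200, jitter_deg=10.0)
print(len(extra))  # 200
```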

For the NMM local mode, try keeping the morph targets per bone between 3 and 12 or so. You typically want to stay below 500 morph targets in total, I’d say; preferably 250 max.

Start by setting the compression to the lowest value (I think in 5.1 it’s called Error Tolerance) and the Delta Threshold to 0. Make things look as good as possible with those settings. After that, try increasing both values, especially the Delta Threshold (called Delta Zero Threshold in 5.2). That will greatly reduce memory usage, but at some point it starts creating bad artifacts on the mesh.
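Conceptually, a delta (zero) threshold drops any vertex delta smaller than the threshold so it doesn't have to be stored. This pure-NumPy sketch (fake data, not engine code) shows how raising the threshold shrinks the number of non-zero deltas — and why too high a value starts eating fine detail:

```python
# Sketch of delta thresholding: per-vertex deltas shorter than the threshold
# are zeroed out, trading fine detail for compression.
import numpy as np

rng = np.random.default_rng(2)
deltas = rng.normal(scale=0.05, size=(10_000, 3))   # fake morph target deltas

def apply_threshold(d: np.ndarray, threshold: float) -> np.ndarray:
    """Zero any per-vertex delta whose length falls below the threshold."""
    keep = np.linalg.norm(d, axis=1) >= threshold
    return d * keep[:, None]

for t in (0.0, 0.05, 0.1):
    kept = np.count_nonzero(np.linalg.norm(apply_threshold(deltas, t), axis=1))
    print(t, kept)  # higher threshold -> fewer non-zero deltas to store
```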

In the NMM Local Mode, the “number of hidden units” setting should be about the same as your “morph targets per bone” setting. Keep the number of hidden layers to 1 or 2.

When iterating and trying things out, set the number of iterations to 1000 or 2000 and limit the maximum number of samples to, say, 5000. This lets you train much more quickly. Once you start getting nice results, try 10k samples and more input sample frames.

If you lose fine details, it is possible that the Delta Threshold value in the training options is filtering out very small changes. Try lowering it; that’s why you should start with a value of 0. A low value can increase memory usage quite a bit though, so experiment to see what value works (the higher the value, the better the compression).

For your final game version, train with as many poses as possible and an iteration count above 10,000. There are diminishing returns though.

Hope this helped a bit!
