Hello,
I am currently using the ML Deformer – Detail Pose Model to train a cloth deformation model, and I am trying to better understand and control the cooked asset size and memory usage (both during training and at runtime).
At the moment, I am focusing on how the training data and training settings affect:
- Cooked ML Deformer asset size
- Main memory usage (Editor / runtime)
I initially assumed that the number of training iterations would have a direct, predictable relationship with asset size and memory usage. However, after testing different values (measured roughly as sketched below the list), I am seeing counter-intuitive results, for example:
- Training with 1k iterations produces a larger cooked asset and higher main memory usage than training with 10k iterations
- In some cases, training with very few iterations (even a single iteration) results in higher main memory usage than training with 10k iterations
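For context, this is roughly how I compare cooked sizes between runs. It is a minimal sketch: the directory and project names are placeholders for my layout, and each training run is cooked into its own fresh output directory so stale cooked files from an earlier run cannot skew the numbers.

```python
# Sketch of my size comparison. Directory/project names are placeholders;
# each training run is cooked into its own clean output directory first.
import os

COOKED_RUNS = {
    "1 iteration":    r"Cooked_1iter/Windows/MyProject/Content/MLDeformers",
    "1k iterations":  r"Cooked_1k/Windows/MyProject/Content/MLDeformers",
    "10k iterations": r"Cooked_10k/Windows/MyProject/Content/MLDeformers",
}

def dir_size_bytes(root: str) -> int:
    """Total size of all cooked files under root (the .uasset/.uexp pair dominates)."""
    total = 0
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            total += os.path.getsize(os.path.join(dirpath, name))
    return total

for run, path in COOKED_RUNS.items():
    print(f"{run}: {dir_size_bytes(path) / (1024 * 1024):.2f} MiB")
```

For main memory I compare `memreport -full` snapshots taken at the same point in the same level.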
Because of this, I am currently unable to identify a clear relationship between:
- Training iteration count
- Training data
- Final cooked asset size
- Memory usage
I would like to understand:
- What actually determines the cooked asset size and memory usage for the Detail Pose Model?
  - Is it driven mainly by network architecture, number of bones, number of vertices, PCA settings, or something else? (My rough mental model is sketched after this list.)
- How does the training iteration count affect the final model, if at all, in terms of memory and asset size?
  - Are early or under-trained models storing additional data or fallback buffers?
- What is the recommended way to control or reduce asset size and memory usage for ML Deformer cloth setups?
- Are there specific settings that should be enabled or disabled to reduce memory usage?
- Do I need to manually clear or delete any cached training data (DDC, intermediate assets, or training artifacts) to ensure the cooked asset size reflects the actual trained model?
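To make the first question concrete, below is the back-of-envelope model I have been using to reason about expected size. All counts are illustrative placeholders, and the storage layout (dense float32 network weights plus a PCA basis of per-vertex deltas driven by bone rotations) is my assumption about what the Detail Pose Model stores, not confirmed behavior.

```python
# Back-of-envelope estimate of what I *expect* to dominate the trained
# model's size. Every count is an illustrative placeholder, and the layout
# (dense float32 weights + a PCA basis of per-vertex deltas) is my
# assumption about the Detail Pose Model, not confirmed behavior.
BYTES_PER_FLOAT32 = 4

num_bones = 150            # bones feeding the input layer (placeholder)
inputs = num_bones * 6     # assuming a 6-value rotation encoding per bone
hidden_units = [256, 256]  # fully connected hidden layers (placeholder)
num_vertices = 30_000      # cloth mesh vertices (placeholder)
pca_coeffs = 128           # network outputs / PCA basis vectors (placeholder)

# Dense layer parameters: weights plus biases for each consecutive pair.
layer_sizes = [inputs, *hidden_units, pca_coeffs]
network_params = sum(a * b + b for a, b in zip(layer_sizes, layer_sizes[1:]))

# PCA data: one 3-vector delta per vertex per basis vector, plus a mean shape.
pca_floats = num_vertices * 3 * (pca_coeffs + 1)

print(f"network weights: {network_params * BYTES_PER_FLOAT32 / 1e6:.2f} MB")
print(f"PCA basis:       {pca_floats * BYTES_PER_FLOAT32 / 1e6:.2f} MB")
```

If this mental model is right, nothing in it depends on the iteration count, which is exactly why the measurements above confuse me; they would only make sense if the cooked asset also carried something iteration-dependent (checkpoints, optimizer state, or cached training data), and that is what I am hoping someone can confirm or correct.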
Any clarification on how these systems are designed and how best to optimize them would be greatly appreciated.
Thank you very much for your help.


