Hi, I was hoping to get some guidance on some optimisations I’m looking into for Metahuman Animator.
The aim is to reduce the disk space and memory footprint of these animations, as well as the runtime cost of RigLogic. The main motivation is to allow Metahuman Animator-generated facial performances to be used for crowd NPCs’ dialogue, banter and barks.
Metahuman Animator produces anim sequences that drive facial animation through curves, which are evaluated by RigLogic to drive the bone positions and morph targets. This makes them much more disk- and memory-efficient than normal anim sequences, which store Animation Track data instead of only Float Curve data. It does come at some runtime performance cost, though, since RigLogic has to actually evaluate the curves.
However, MHA still produces a lot of float curves that maintain a constant value of 0 because the controls aren’t actually being animated. Removing these curves doesn’t seem to break the animation and further reduces the disk and memory size of the asset.
Is there a reason to keep these seemingly useless curves around when they’re not actually animated? Would removing them through an AnimModifier or editor utility be a viable method of further reducing disk and memory sizes of the animSequences?
With enough animations these can become sizeable Disk Space savings.
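To make the idea concrete, here’s a minimal sketch in plain Python of the detection-and-strip logic I have in mind. The curve table, tolerance and function names are all my own assumptions for illustration; in the engine this would live in an AnimModifier or editor utility reading the sequence’s float curves rather than a dict:

```python
# Sketch (assumed names, not engine API): find float curves that stay at
# a constant value of ~0 for the whole sequence, so they can be stripped.
# Curve data is modelled here as {curve_name: [keyed values]}.

ZERO_TOLERANCE = 1e-6  # treat tiny float noise as zero

def find_constant_zero_curves(curves):
    """Return names of curves whose every key is (approximately) zero."""
    return [
        name for name, values in curves.items()
        if all(abs(v) <= ZERO_TOLERANCE for v in values)
    ]

def strip_curves(curves, names_to_remove):
    """Return a copy of the curve table without the listed curves."""
    removed = set(names_to_remove)
    return {name: vals for name, vals in curves.items() if name not in removed}

# Example: only the brow curve is dead weight here.
curves = {
    "ctrl_expressions_jawopen": [0.0, 0.4, 0.1],
    "ctrl_expressions_browraisel": [0.0, 0.0, 0.0],
}
dead = find_constant_zero_curves(curves)   # ["ctrl_expressions_browraisel"]
slim = strip_curves(curves, dead)          # jawopen curve survives
```

The tolerance matters: MHA solves can leave near-zero noise on unused controls, so an exact `== 0.0` check would miss them.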
Secondly, removing unneeded curves should also reduce the cost of RigLogic, since you’re feeding it less data to iterate through.
Is there an out-of-the-box way to remove curves through the AnimGraph? I could only find ways to override curves but this doesn’t actually ‘remove’ the curve from being passed into RigLogic.
Although mesh LOD-ing reduces the cost of updating bone positions, the RigLogic cost is unaffected as long as it parses every curve present in the facial animation.
To reduce RigLogic costs I’m considering the viability of LOD-ing the curves themselves. So based on the predicted LOD (or through Significance Manager’s calculated significance) you could filter out curves that don’t do much at larger distances.
The idea would be to have e.g. LOD 4 remove all curves except ctrl_expressions_jawopen, at LOD 3 we also pass through ctrl_expressions_eyeblinkl and ctrl_expressions_eyeblinkr etc. etc. Basically LOD-ing our curves to reduce the work for RigLogic to actually parse them.
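Sketching that curve-LOD table in plain Python (the table contents and function names are my own assumptions, using the control names mentioned above; in-engine this would sit in the AnimGraph or be driven by Significance Manager):

```python
# Assumed curve-LOD table: each LOD level lists the curves it *adds*.
# LOD 4 (coarsest/furthest) keeps only jaw open; LOD 3 adds the blinks;
# finer LODs would add progressively more curves.

CURVES_ADDED_AT_LOD = {
    4: ["ctrl_expressions_jawopen"],
    3: ["ctrl_expressions_eyeblinkl", "ctrl_expressions_eyeblinkr"],
    # LODs 2..0 would add progressively more of the control set
}

def allowed_curves(lod):
    """Union of all curves introduced at this LOD level or any coarser one."""
    allowed = set()
    for level, names in CURVES_ADDED_AT_LOD.items():
        if level >= lod:
            allowed.update(names)
    return allowed

def filter_curves_for_lod(curve_values, lod):
    """Drop curves not allowed at this LOD before they reach RigLogic."""
    keep = allowed_curves(lod)
    return {name: v for name, v in curve_values.items() if name in keep}
```

So a far-away NPC at LOD 4 would only ever hand RigLogic a single jaw curve, while LOD 3 passes three, and so on down to the full set at LOD 0.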
Additionally, it seems Livelink and stADA aren’t designed for use with crowds of NPCs, as the system relies on pre-configured Livelink Sources. The blueprint-exposed API doesn’t allow for creation of new Sources at runtime.
Is there future intent that streaming audio could be used for simultaneous lipsync of crowd NPCs?
This could avoid the hassle of needing to maintain 1000s of assets for all lipsync in all languages of NPC barks and banter.
So far it seems streaming-audio-based lipsync is mostly not intended for use at large scale.
To recap, my questions are:
- is there a need to keep seemingly pointless Anim curves on MHA-generated lipsync?
- does LOD-ing curves to reduce RigLogic costs make sense?
- is there a built-in way to remove curves in animBP?
- is Livelink/stADA intended for current or future use of multiple characters doing streaming-audio based LipSync simultaneously?
Thanks for reading through all that and I appreciate any help you can provide!
Btw, Matt Lake’s comments on Content Auditing in the UnrealFest talk on Character and Animation Optimization helped me a lot in realising the disk size and memory benefits of MHA’s curve-based approach. Great talk!