MetaHuman Animator - Exclude frames from footage

Hey everyone,

I have been testing MetaHuman Animator and ran into a problem: the solve completely breaks when the face is occluded by a hand, and it also corrupts quite a few frames around the time the occlusion happens. ARKit handled this kind of scenario much better.

Is there any option, or one planned, to exclude frames within the start-end range from being solved? And perhaps an option to fill those gaps with the neutral pose, or a blend between the start and end frames of the excluded range?

In our company we often have to put a hand in front of the face while recording, and this sadly prevents us from using Animator. We would be happy if this were made possible, or if anyone has a workaround right now.


Hi, unfortunately we don’t currently have any options like this. Thank you for the feedback though, and we’ll keep it in mind as we look at future improvements.

The best I can suggest at present is to set your processing range to the areas without occlusion and process them one by one. We don’t clear previously processed frames when you process a new range, so you can gradually build up to having the whole take processed while skipping the occluded frame ranges. When you export, you shouldn’t get keys for those unprocessed ranges, and you can then interpolate across them as a post step.
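As a rough illustration of that post step, here is a minimal sketch of linearly interpolating curve values across the unprocessed gaps. It operates on plain frame-to-value data; in practice you would read the exported curves from whatever format you export to, and `fill_gaps` is just a name I’ve picked for the example, not part of any shipped API.

```python
def fill_gaps(keys):
    """keys: dict mapping frame number -> curve value, with gaps where
    frames were left unprocessed. Returns a copy with the missing frames
    filled by linear interpolation between the nearest solved frames on
    either side of each gap."""
    frames = sorted(keys)
    filled = dict(keys)
    for a, b in zip(frames, frames[1:]):
        span = b - a
        if span > 1:  # a gap of unprocessed frames between a and b
            for f in range(a + 1, b):
                t = (f - a) / span
                filled[f] = keys[a] * (1 - t) + keys[b] * t
    return filled
```

You would run this per curve (one dict per blendshape/control), which keeps the blend confined to exactly the frames that never received keys.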

I’m aware this isn’t as nice as specifying ranges to ignore. If you have people in your company with Python skills, though, you could look at using our Python API to automate the steps I’ve mentioned above.
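To sketch what that automation might look like: the range arithmetic below (splitting a take into sub-ranges that skip the occluded frames) is real, runnable code, but the commented call at the bottom uses a hypothetical `process_range` name purely as a placeholder for whatever the actual MetaHuman Python API exposes.

```python
def processing_ranges(start, end, occluded):
    """start, end: inclusive frame range of the whole take.
    occluded: list of (first, last) inclusive frame ranges to skip.
    Returns the inclusive sub-ranges that should be processed."""
    ranges = []
    cursor = start
    for first, last in sorted(occluded):
        if first > cursor:
            ranges.append((cursor, first - 1))
        cursor = max(cursor, last + 1)
    if cursor <= end:
        ranges.append((cursor, end))
    return ranges

# for lo, hi in processing_ranges(0, 500, [(120, 150), (300, 320)]):
#     identity.process_range(lo, hi)  # hypothetical API call, not the real name
```

You would feed in the occluded ranges you noted during the shoot, then loop over the result, setting the processing range and solving each sub-range in turn.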