Metahuman Animator Production Feedback

I’ve been using MHA quite a lot over the last month and have some questions and feedback about production workflow and working with lots of takes.

Firstly, I think I might have found a fairly large workflow bug. When you export the animation directly from the performance file, I assume it’s doing some kind of head stabilization as part of the export.

I’ve noticed that it looks a lot worse than if you export to a level sequence and then export from sequencer as an animation. That route still mutes some of the performance, but nowhere near as badly as going direct. I can share some examples, but the direct export is pretty unusable at the moment.

Another question, to which I suspect the answer will be no: is it possible to fix the tracker and re-run it if it gets confused? We are using a TP stereo HMC, but the tracker often gets the jaw line wrong. Rather than re-animating the jaw, it would be great to be able to fix the frame and re-track.

On this point, is there a recommended place to fix the animation? The curve editor seems like the best place, but it’s only present in the animation editor, which is missing the reference footage and the control board. Basically, I’m finding it really hard to know which channels to work on. It also doesn’t help that the animation editor doesn’t support the timecode range; it always starts at frame 1. So I’m having to flick between the performance (which has timecode frames, the reference footage, and the control board) and the animation editor (which starts at frame 1) to make any changes.
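For what it’s worth, keeping track of the offset between the timecode view and the frame-1-based animation editor is just arithmetic. A minimal sketch (my own illustration, not a UE API; it assumes a fixed, non-drop frame rate):

```python
def timecode_to_frame(tc: str, fps: int) -> int:
    """Convert an HH:MM:SS:FF timecode string to an absolute frame count.

    Non-drop-frame only; drop-frame rates like 29.97 need extra handling.
    """
    hh, mm, ss, ff = (int(part) for part in tc.split(":"))
    return ((hh * 60 + mm) * 60 + ss) * fps + ff

def to_editor_frame(tc: str, take_start_tc: str, fps: int) -> int:
    """Map a timecode into the animation editor's frame-1-based range."""
    return timecode_to_frame(tc, fps) - timecode_to_frame(take_start_tc, fps) + 1

# Example: a take whose first frame has timecode 10:00:00:00, at 24 fps
print(to_editor_frame("10:00:02:12", "10:00:00:00", 24))  # frame 61
```

Obviously a helper like this doesn’t fix the editor itself, but it saves mental arithmetic when hopping between the two views.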

I’m keen not to have to do keyframe fixes in Maya and would love to stay in engine if at all possible.

I’ve also noticed that projects can get very large when importing multiple stereo HMC takes. It would be great if, as part of the import process, you could set a frame range, maybe with handles if the tracker requires them. That way it would only ingest the footage you need. I know you can avoid importing the footage into the project, but we work as a team using Perforce, so having it in the project is ideal.

There are a couple of smaller niggles that I would like to raise.

When exporting a level sequence, it always defaults to the whole take. It would be great if UE would remember the choice; I’m not sure why you would want to export anything other than the selected range anyway.

It seems a bit counterintuitive to have a capture source for each take. Would it not make more sense to have a single capture source that can point to multiple folders?

Thanks for any help and suggestions,

Tim Doubleday

Hello there. Could you fix the head floating every time we use MetaHuman Animator? Thanks.

Hi Tim. Great to hear that you are using MHA :-).

I’ll dig into the workflow bug that you have found, and also respond to your other points, and/or take some of them away to get answers from other people.

Can you clarify the details of the workflow that you are using in order to reproduce the animation sequence vs. level sequence export bug, and how you are comparing the results between the two? What export options are you using in each case? We would obviously expect the facial animation results to be identical between the two, although head movement may be different depending on export options chosen (see below!). If there are differences, then clearly there is a bug.

In terms of head stabilization, there are a number of options on export, and these are presented slightly differently for the level sequence and the animation sequence export. Which approach you choose depends on your exact use case and how you intend to combine face and body animation. Essentially, there are two possible approaches to Head Movement. Both of these can be previewed in the Performance asset, and then exported if required.

The first approach is Head Movement Mode = Transform Track. This simply calculates the best rigid transform of the head to match the input Footage Capture Data. This will give you the best visual match between the reference footage and the resulting animation, but the rigid head transform is not typically so useful for export purposes (although it can be exported if you need it).

The second approach is Head Movement Mode = ControlRig. In this case, the neck and shoulder movement is solved via ControlRig, by default assuming that the actor’s torso is facing the camera. The resulting facial animation will be identical in each case, but in the second case, the overlay of the results onto the original reference footage will be slightly less accurate.

In terms of Use Cases for the different types of Head Movement, typically if you are using a head-mounted iPhone or stereo HMC, you would not need or use the head movement from MHA, so would probably choose not to export it. If you are using a non head-mounted iPhone or stereo pair, the head movement results may be useful, and you would typically use ControlRig head movement.
How does this map onto the export options? For Animation Sequence export, if you tick the “Enable Head Movement” checkbox, ControlRig head movement will be exported; otherwise, no head movement will be exported. For Level Sequence export, you can choose to export either, both, or neither of the Transform Track and the ControlRig Track. It really depends on your use case.
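The mapping above can be restated as a small decision function. This is purely illustrative (a hypothetical helper of my own, not a UE API; the option names are taken from the description above):

```python
def exported_head_tracks(export_type: str,
                         enable_head_movement: bool = False,
                         transform_track: bool = False,
                         control_rig_track: bool = False) -> set:
    """Return which head-movement tracks end up in the export,
    per the rules described above. Illustrative only, not a UE API."""
    if export_type == "animation_sequence":
        # A single checkbox: ControlRig head movement, or nothing at all.
        return {"control_rig"} if enable_head_movement else set()
    if export_type == "level_sequence":
        # Either, both, or neither track can be chosen independently.
        tracks = set()
        if transform_track:
            tracks.add("transform")
        if control_rig_track:
            tracks.add("control_rig")
        return tracks
    raise ValueError(f"unknown export type: {export_type}")

print(exported_head_tracks("animation_sequence", enable_head_movement=True))
# {'control_rig'}
```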

Next: “is it possible to fix the tracker and re-run if it gets confused?”. Not currently, I’m afraid. I will pass the request on to the MHA Product Managers.

Finally: “is there a recommended place to fix the animation?”. Let me take this away to be answered by a Technical Animator; I will try and find out what workflow(s) they typically use to do this in UE.

On your point regarding project size, this is a very valid concern. I’ll pass on your specific suggestion to the MHA Product Managers.

Finally, the smaller niggles… On the first point, I will pass the suggestion to the MHA Product Managers. On the second point, regarding Capture Sources: whilst a Capture Source cannot currently point to multiple folders, it is recursive in terms of recognizing folder structure, so if you have a parent folder containing a bunch of separate take folders, the Capture Source will pick all of them up.

Just to summarize, if you could elaborate on how to reproduce the export workflow bug, that would be great, and I will also follow up later on recommended workflow for animation editing within UE.

All the best,
Jane


Hi Tim. A bit more follow-up on the “where/how to do animation editing within UE” point. Having asked the technical animators, their recommendation is to export as a level sequence and edit the animation in the sequencer graph as opposed to the animation editor. That way you have your reference image plane, faceboard and graph. It should also adhere to timecode there too.
Hope this helps,
Jane

Hi Jane nice to hear from you and thanks for the quick reply.

All our MHA projects so far are using Performance Capture so any head and neck movement falls under the body performance coming from the Vicon system. I’ve therefore been turning off Export Head Motion for both sequencer and animation exports.
Unfortunately even using a helmet like the TP one we still see quite a lot of helmet wobble and face movement particularly for heavy action shots.

But upon closer inspection, I think the outputted animation is identical. What I was seeing was the sequencer version being viewed on the identity mesh and the animation export version being viewed on the actor’s MH mesh. These meshes are quite different, which causes some of the poses to look different. If I switch both to use the MH actor mesh, they are identical.

Thanks for the recommendation on making edits in the sequencer. Is there a way to display the keys as curves rather than ticks in the timeline? It must be possible; I just can’t see an option.

I’ve found the ideal export solution for us is to untick both control rig and transform track.
This means the head movement is removed when editing in sequencer and the control board stays static. Everything is slightly rotated and the image plane is at the wrong angle but this can be fixed easily enough!

The only slight issue is that the export sequencer options aren’t remembered. If UE could remember what you chose last time that would be fab and remove user error!

Thanks for all your help. I think this really is a revolutionary approach to markerless facial animation and the results are outstanding!

Hi Tim,
Apologies for the delay in replying; I have been on vacation.

Yes, that makes sense :-).

You just select one or more animation controls in the Face_ControlBoard_CtrlRig track and then click on the Curve Editor button (shown). Then you can edit in the Curve Editor.

Makes complete sense. I will pass this as a suggestion to Product.

All the best,
Jane