The new 3.0 MetaHumans: 40% performance gains? Tutorial inside.

Quite disappointing if that is the case. I'll be diving into this topic in the near future because I'll be depending on mocap libraries for part of a project. Using a library obviously means it has to be matched to a skeleton, so people expect a modern "standard" that is also game ready. Anything else is a ton of work.

I have already researched markerless mocap for custom / procedural rigs using nothing but one cheap webcam, and concluded that even modern AI-assisted solutions (DeepLabCut) expect synced recordings from multiple camera angles as the source for 3D pose estimation, whereas I once assumed the 3D pose could simply be estimated from skeletal data plus a single camera… Meaning DIY 3D mocap isn't quite there yet, especially when you work with animals or custom objects!
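To give an idea of what I mean by the multi-camera requirement, here's a rough sketch of the DeepLabCut 3D workflow. The function names come from the DeepLabCut Python API, but exact arguments can differ between versions, and all project names and video paths are placeholders I made up:

```python
# Rough sketch of the DeepLabCut multi-camera 3D workflow.
# Function names are from the DeepLabCut Python API; exact arguments may
# differ between versions, and all paths/names here are placeholders.
import deeplabcut

# 2D project: label frames and train a network as usual.
config_2d = deeplabcut.create_new_project(
    "tennis-hands", "me", ["videos/cam1.mp4", "videos/cam2.mp4"]
)
deeplabcut.extract_frames(config_2d)
deeplabcut.label_frames(config_2d)        # manual labelling step (GUI)
deeplabcut.create_training_dataset(config_2d)
deeplabcut.train_network(config_2d)
deeplabcut.analyze_videos(config_2d, ["videos/cam1.mp4", "videos/cam2.mp4"])

# 3D project: this is where the synced multi-camera requirement comes in.
# You calibrate the camera pair (checkerboard images), then triangulate the
# two 2D estimates into 3D poses. A single webcam isn't enough here.
config_3d = deeplabcut.create_new_project_3d("tennis-hands-3d", "me", num_cameras=2)
deeplabcut.calibrate_cameras(config_3d)
deeplabcut.triangulate(config_3d, "videos/")
```

The point is that the 3D step is triangulation across calibrated, synced cameras, not monocular lifting, which is exactly why a single cheap webcam falls short.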

Well, a good modern example showing that high-detail animation is possible in games would be the Horizon series, or the mocap (face too!) in Beyond: Two Souls, which came out back in 2013. Different engines, of course.

That would be context specific. It feels like there is performance to be gained at zero quality loss if you could procedurally remove detail based on context. Say, when you want to attach clothing, you'd optimally want one skeletal mesh adapted to how much detail is actually visible with gloves on (even that can be problematic). But at 30 meters away it would still look weird when someone plays tennis without animated fingers (say, open hands in the default pose). It's very much context and resolution dependent; a sketch of the kind of rule I mean is below.
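Purely to illustrate the context-driven idea (engine-agnostic; every name and threshold here is invented for the example, not taken from any engine's API):

```python
from dataclasses import dataclass

# Illustrative, engine-agnostic sketch of a context-driven animation LOD rule.
# All names and thresholds are made up for this example; a real engine would
# hook a rule like this into its own skeletal LOD / animation budget system.

@dataclass
class AnimContext:
    distance_m: float        # camera-to-character distance
    hands_occupied: bool     # e.g. gripping a tennis racket
    wearing_gloves: bool     # gloves can hide missing finger detail

def animate_fingers(ctx: AnimContext) -> bool:
    """Decide whether finger bones are worth evaluating this frame."""
    if ctx.hands_occupied:
        # A gripped racket reads as wrong even at distance if fingers are stiff.
        return ctx.distance_m < 60.0
    if ctx.wearing_gloves:
        # Gloves mask most of the detail, so fingers can be dropped much earlier.
        return ctx.distance_m < 10.0
    return ctx.distance_m < 30.0

# Tennis player 30 m away, racket in hand: fingers still worth animating.
print(animate_fingers(AnimContext(distance_m=30.0, hands_occupied=True, wearing_gloves=False)))
```

The thresholds are arbitrary; the point is that the cut-off depends on what the hands are doing and what covers them, not on distance alone.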