Morph target performance question.

Hello everyone,

I recently started working with morph targets in Unreal for face animation and I’m liking the results a lot.

Is there a performance (memory usage) difference between using morph targets instead of bones for some deformations? Particularly for complex deformations that would require several bones to achieve?

Thanks in advance for the help.

Using raw morph targets for facial animation has a very high per-player-model memory cost, and I suspect runtime performance can be affected as well.

Problem number one with vertex-based animation (aka morphing) is that each character needs the same number of vertices and, more importantly, matching vertex IDs as to their location in 3D space, so for each unique character you make you would need to author its own set of matching targets.

So

You cannot recycle the same animation sets across a broad range of different characters.
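A minimal sketch of the point above (plain Python with made-up data, not engine code): a morph target is just a list of per-vertex offsets keyed by vertex index, so it only makes sense against a mesh with the same vertex count and ordering.

```python
# Sketch: a morph target stored as per-vertex offsets (hypothetical data).
# The offsets are indexed by vertex id, so the target is only valid for a
# mesh with the same vertex count and ordering.

def apply_morph(base_verts, offsets, weight):
    """Blend a morph target onto a base mesh at the given weight (0..1)."""
    if len(base_verts) != len(offsets):
        raise ValueError("morph target does not match mesh topology")
    return [
        (x + weight * dx, y + weight * dy, z + weight * dz)
        for (x, y, z), (dx, dy, dz) in zip(base_verts, offsets)
    ]

base = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0), (0.0, 1.0, 0.0)]
smile = [(0.0, 0.1, 0.0), (0.0, 0.2, 0.0), (0.0, 0.0, 0.0)]
print(apply_morph(base, smile, 0.5))
```

A target authored for a different character (different vertex count or ordering) fails the topology check, which is why the animation sets cannot be recycled.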

As well

Also questionable is whether morph targets can be pre-rendered in hardware, as they are usually generated at runtime, and if they can't, the amount of data can become massive. I once calculated the total cost for 16 characters, with LODs, at about 16 GB worth of raw point-to-point data, compared to about 10 MB using clusters.
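A back-of-the-envelope version of that comparison (all counts below are illustrative assumptions, not the poster's actual numbers): storing a position per vertex per frame scales with mesh density, while bone transforms scale only with bone count and can be shared across characters.

```python
# Back-of-the-envelope comparison (illustrative numbers only): raw per-frame
# vertex data vs. per-frame bone transforms shared across characters.

BYTES_PER_FLOAT = 4

def raw_vertex_anim_bytes(verts, frames, characters):
    # Every frame stores an (x, y, z) position for every vertex of every
    # character's unique target set.
    return verts * 3 * BYTES_PER_FLOAT * frames * characters

def bone_anim_bytes(bones, frames):
    # One set of bone transforms (say, 10 floats each) reused by every
    # character that shares the skeleton.
    return bones * 10 * BYTES_PER_FLOAT * frames

raw = raw_vertex_anim_bytes(verts=15_000, frames=5_000, characters=16)
boned = bone_anim_bytes(bones=60, frames=5_000)
print(f"raw vertex data: {raw / 2**30:.1f} GiB, bone data: {boned / 2**20:.1f} MiB")
```

Even with these rough assumptions the gap lands in the same ballpark as the gigabytes-vs-megabytes figure quoted above.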

Markers, or clusters, have by far the lower resource hit, and the cost is further amortized over the total number of player models across which you can reuse the animation sets. Like everything else animation-wise in Unreal 4, clusters can be layered as an additive rather than an absolute, so it all fits into a single animation pipeline without having to deal with the “special” requirements of morph targets.

Hi,

Thank you so much for the response. I’m sorry to hear that morph targets don’t perform (computing-wise) as well as joints. I’m surprised to hear that too, because in other software packages like Maya they actually perform faster than bones.

Thanks again for taking the time to share your knowledge.

Cheers!

Well, cornerstone apps like 3ds Max and Maya are not very good when it comes to what would be considered real-time playback, as they don’t really take advantage of hardware rendering, and it doesn’t take much of a load to slow things down.

If you are more interested in the animation side of games development, they are not what I would consider ideal tools, as they are designed to cover as many bases as possible and don’t really do one thing insanely great.

The tool of choice, be it player animations or lip sync, would in my opinion be Motion Builder.

Yeah, I use Motion Builder for body animation, it’s very fast, although not without its quirks, but I guess the same can be said about any software package.

I have animated faces a couple of times in it, mostly morph-target based. Morph targets are pretty fast in MOBU, although I must admit I never compared them with a bone-based system either.

Thanks again for sharing your knowledge.

Cheers!

Sorry to hijack this thread, but I have a related question… basically I need to create a facial animation system for the main character (only one), but perhaps more importantly, for the hundreds of random John Does of the world.

I am afraid that having hundreds of characters with a fully fleshed-out facial rig would be a significant performance hit compared to using blend shapes. Notably, though, this also includes customization of the John Does to make sure they don’t look identical.

Our rig guy told me that the eyes and mouth need to be rigged and that the rest can be customized with blend shapes.

All of that being said, what is the best approach to something like this? Customizable NPCs with facial animations. Obviously each race will have male and female variants.

Well, NPCs are a totally different subject compared to what would be considered a “hero”-type model by design, so for this purpose full-body morphs would be a very effective use of resources. Design-wise, though, you’re getting into making-the-next-Assassin’s-Creed territory, which would require a rather extensive design document, built from the top down.

As a starting point, though, to test theory against practical application, Daz Studio is an excellent try-before-you-buy application, as it already has key features to jump-start the testing phase without a lot of messing around remodeling. But if you want to go with what you’ve got, morph targets are additive, meaning you can change the shape of the head and it will just add the other targets on top as a +1.

The big question, though, is do your NPCs need to act (i.e. dialogue), or just fill the background with a few key expressions?

They need to act AND have customization. There have to be many different NPCs on the screen at the same time and performance is a concern.

Blend shapes take up more memory, but if I’m not mistaken the cost is “paid” only once for all the NPCs, while a complex customization + animation facial rig would incur a runtime cost for every NPC.

Well, the big question is “how does UE4 manage morph targets”, as traditionally, to maintain performance, the mesh containing the shapes has to upload each frame of the morph as it’s being rendered. I do know this was an issue with past engines, but I don’t know… yet… whether UE4 handles the load as an instanced copy (it’s on my list to try). If morphs are instanced, then you can have hundreds of characters.
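To make the "instanced" idea concrete, here is a sketch in plain Python (not the UE4 API; all names are hypothetical): the heavy per-vertex delta data lives in one shared asset, and each character on screen only adds a small block of per-instance weights.

```python
# Sketch (not UE4 API): if morph data is instanced, the heavy per-vertex
# deltas are stored once in a shared asset, and each character on screen
# only adds a small per-instance block of current weights.

class MorphAsset:
    """Shared, read-only morph data: one copy no matter how many users."""
    def __init__(self, deltas_by_shape):
        self.deltas_by_shape = deltas_by_shape  # {name: [(dx, dy, dz), ...]}

class CharacterInstance:
    """Per-character state is just the weights, a few floats per shape."""
    def __init__(self, asset):
        self.asset = asset
        self.weights = {name: 0.0 for name in asset.deltas_by_shape}

shared = MorphAsset({"smile": [(0.0, 0.1, 0.0)] * 10_000})
crowd = [CharacterInstance(shared) for _ in range(300)]
# All 300 instances reference the same delta storage:
print(all(c.asset is shared for c in crowd))
```

Under that model the memory cost really is "paid once", and per-NPC cost is just the weight floats plus whatever the engine spends evaluating active morphs.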

But

The need for digital people (NPCs) has been around forever, and there are already turnkey software solutions, Daz Studio being one of them.


Yeah, but Daz is not an option, since we aren’t talking about humans. The biggest issue really is the lack of examples. I’d pay to see how e.g. Skyrim did this.

There are just so many problems - I’d need to do morph targets for every LOD (although I’d only do 3-4 for LOD1 to maintain general face shape, and just use a single mesh with no blends for lower LODs).

But then, if I use morph targets for customization, does that make it impossible to use them for animation?

That would mean that animation needs to be fully skeletal which again… dozens of chars with a complex facial skeleton.

And if I do that I might as well do customization with a skeleton additively as well.

Am I overthinking this?

Not really… more like chasing your tail. The tech is there to do it, but not as something you can buy off the shelf at S-Mart. So if it’s something that you really want to do, based on your team’s current design direction, then you really need to sit down, map it all out, and break things down into smaller chunks instead of looking at it as the sum of all its parts. Coming up with the design documentation is not something that you can do at lunch on the back of a napkin. :wink:

Nope. As I said, morphs are additive, meaning only the offset is recorded and not its absolute location in world space, so you can add morphs to a skeletally rigged character, be it a zombie or a 400-pound fat guy, and it will work just fine.
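A quick sketch of that additive behaviour (hypothetical data): because each target stores offsets rather than absolute positions, a customization morph and an animation morph simply sum on top of whatever base shape the character currently has.

```python
# Sketch (hypothetical data): each morph stores offsets, not absolute
# positions, so several weighted morphs stack on top of any base shape.

def blend_morphs(base_verts, morphs):
    """morphs: list of (weight, offsets) pairs; offsets match base topology."""
    result = list(base_verts)
    for weight, offsets in morphs:
        result = [
            (x + weight * dx, y + weight * dy, z + weight * dz)
            for (x, y, z), (dx, dy, dz) in zip(result, offsets)
        ]
    return result

base = [(0.0, 0.0, 0.0)]
fat_head = [(0.2, 0.0, 0.0)]   # customization morph
smile = [(0.0, 0.3, 0.0)]      # animation morph, layered on top
print(blend_morphs(base, [(1.0, fat_head), (0.5, smile)]))
```

The order of the morphs doesn't matter for the final shape, which is exactly why an animation target authored against the neutral head still works on the customized one.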

Well, depending on what applications you have access to, there are ways of transferring the main targets to the LODs. A bit time-consuming, maybe a few hours to a few days, but no need to redo them all from scratch.

Keep in mind.

This is AAA-studio stuff, but it can be done if you plan it all out ahead of time.

Well now at the inception stage I really need to know some basic things like:

-Are facial rigs expensive when they “aren’t doing anything”?
-Same question, but with morphs. Do morphs even have a runtime cost while they’re just sitting there and not blending?
-Is there any documentation, any example of a AAA game that did randomized NPCs with facial animation, like e.g. Skyrim?

-Are facial rigs expensive when they “aren’t doing anything”?

No more than what’s allowed for the actual character model’s rig. It’s a rig that can share the same skeletal pathway as the character, keeping everything in a single channel, including the animation sets. Clusters are way less expensive per vertex, easier to drive using voice or optical data, and their memory footprint stays 1:1 no matter how many characters you add.

-Same question but with morphs. Do morphs even have a runtime cost while they’re just sitting there and not blending?

Of course. Each vertex that has a +1 needs to be recorded as part of the morph progression and stored, so even while nothing is moving, the memory footprint can become massive, and if for some reason the main target loses a vertex, your entire morph chain will break. More or less, if you have 70 shapes, you have 70 times the load you would have using clusters.

-Is there any documentation, any example of a AAA game that did randomized NPCs with facial animation, like e.g. Skyrim?

Well, that’s trade-secret stuff, but I doubt they use morphing for anything other than the hero characters. Looking at it, it seems to me Skyrim is just using the talking-mailbox approach: cheap lip sync done by taking the dialogue track, converting it to rotation data, and tada, talking characters. In other words, procedural and not keyframed.

To bug you one last time. :wink:

Morphing animation is 1999 stuff: very expensive, high maintenance, and extremely difficult to drive. It’s what Quake3 used to animate its MD3 player models, and even though those models were low-poly, they accounted for the majority of the game’s overall footprint.

Clusters, on the other hand, are dirt cheap by comparison, can be driven procedurally, let the animations be reused on different characters, and keep everything moving through the same animation pipeline without the need for any special code or blueprints. Clusters are just another blend layer, like an aim offset.

If you’re looking for someone to tell you what to do, I’ll say don’t use morphing for dialogue, as it’s way too messy and takes a lot of planning when the ability to use clusters is already available as just another blend state.

This would be the ideal solution were it not for the fact that we aren’t just talking about animations, but face randomization / customization as well. That should be doable with a simple additive animation and a few bones, though…

This pertains, so I’ll resurrect a 4-year-old topic…

Has anyone tested large mesh morph performance out?

I’m curious to know if it’s more or less performant than dynamic tessellation.

Tessellation happens GPU side.
Morph targets hopefully do not (or it would defeat my purpose, but it’s worth a try).

Let’s assume a landscape LOD0 tile of vertices being affected by the morph.

I’ll probably test this out tomorrow …

If you guys remember the Cave game, I’m thinking it would be possible to throw 20 or so morph targets on the same mesh and make an endless runner with procedural randomness (as you can randomly offset each morph between 0 and 1).
As an added bonus, you can literally bring the ceiling down onto the player in real time…
Of course, that’s assuming complex collision with the mesh just works, and that having a few thousand vertices affected in real time doesn’t just kill the performance…

…and again in 2020.

Can we get an Epic dudess/dude to comment on morph target performance? Just a couple of examples. Like a character made of 10,000 verts having 10 morph targets with 1,000-vert coverage on each morph target. Good? Bad?

The only way to know is to bench it yourself on your project since the overall scene complexity matters.

As far as I know and anyone can tell, working material shaders to shift vertices is much less expensive and way faster than using skeletal meshes and animations.
Meaning if you have a crowd of 1000 people, each with a full skeletal rig, you get very poor performance.
If instead you convert the animations to some sort of vertex shader/movement, you can still get very good performance.
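One common way to do that conversion is to bake the animation into a table of per-vertex offsets that the GPU can look up per frame (the "vertex animation texture" idea). A toy sketch of the bake/playback split in plain Python (the real thing stores the table in a texture and samples it in the vertex shader):

```python
# Toy sketch of "bake skeletal animation to vertex offsets" (hypothetical
# data; a real pipeline stores the table in a texture sampled GPU-side).

def bake(frames_of_verts, rest_pose):
    """Precompute per-frame offsets from the rest pose (done once, offline)."""
    return [
        [(x - rx, y - ry, z - rz)
         for (x, y, z), (rx, ry, rz) in zip(frame, rest_pose)]
        for frame in frames_of_verts
    ]

def playback(rest_pose, baked, frame_index):
    """Per-frame playback is a cheap lookup + add, with no skinning math."""
    offsets = baked[frame_index]
    return [(x + dx, y + dy, z + dz)
            for (x, y, z), (dx, dy, dz) in zip(rest_pose, offsets)]

rest = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
anim = [[(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)],
        [(0.0, 0.5, 0.0), (1.0, 0.5, 0.0)]]
baked = bake(anim, rest)
print(playback(rest, baked, 1))
```

The trade-off is the same one discussed earlier in the thread: playback is trivially parallel and cheap per instance, but the baked data grows with vertex count times frame count.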

A blatant example of this is Auzu’s fish schools. Look up their videos for talks on it and on how they do it.
Keep in mind it’s not exactly the same as a morph target - though it is in essence. You manipulate the vertex positions GPU-side.

What you would need to bench however is the actual cost of shifting the morph value from 0 to 1 on multiple different characters at once.
Each would have to be imported as a differently named thing in order to prevent automatic instancing.

I would start with 1 model doing something simple like a blink, import it 20 times with different skeletons, and code the level to shift all 20 morphs, either at once or through a for-each loop.

With that you can sort of find out what the cost is, specifically for your project, and determine the radius in which you will be animating multiple morphs.
It’s most definitely a lot of work.

I guess we’re talking about CPU performance…? You can use the GPU if needed, but by default it’s calculated on the CPU:

(“Optimization” section)
… So it’s possible to change that if you’re CPU bound.

BTW: Does a large number of morph targets impact performance if they are not currently used (by changing the morph value)? Or do we only need to worry about actively calculating morph targets…?

So every time I see a question formatted like this, I have to ask: within what context? Morph targets by themselves do have a percentage cost in performance, as does any other type of usable asset, and it’s more a question of what purpose the morph targets need to serve, as with any other element added to what I assume would be a playable game.

For this example, the problem as to the X-factor is that 10,000 verts is not all that heavy a load as far as testing goes, and the nature of a morph target is that “only” the vertices that move are recorded as offsets, so on an object with 10,000 vertices only 100-300 might move and be cached as part of the asset.
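A rough sketch of why that matters for the memory side (byte sizes below are illustrative assumptions): if the asset keeps a sparse (index, delta) list, storage scales with the moved vertices, not with the mesh size.

```python
# Sketch: a morph target as a sparse (vertex index -> delta) record, so
# storage scales with the vertices that actually move, not mesh size.
# Byte sizes are illustrative assumptions, not engine internals.

def sparse_morph_bytes(moved_verts, bytes_per_delta=12, bytes_per_index=4):
    # One 3-float delta plus one index per moved vertex.
    return moved_verts * (bytes_per_delta + bytes_per_index)

mesh_verts = 10_000
for moved in (100, 300, 1_000):
    kb = sparse_morph_bytes(moved) / 1024
    print(f"{moved} moved verts out of {mesh_verts}: ~{kb:.1f} KiB per target")
```

On those assumptions, even the full 1,000-vert target from the question above stays in the tens-of-kilobytes range per shape, which is why the vertex count alone isn't the X-factor.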

That being the case, then sure, there is a drop in performance, just as with complex shaders, and of course one should be concerned about overuse of a good thing, like a world full of particle effects. But if framed within a context of some kind, one can worry less about the thing in general and focus more on usable techniques that serve the purpose.

I think my Ferrari will always beat my old pickup truck to the finish line, but which would be more useful if I’m helping a friend move? :wink:

If you own a Ferrari you probably have no friends - let alone one that needs moving :stuck_out_tongue: