I have a character that I'll be using in a short animated cinematic, and it has a lot of morph targets to handle the facial animation. I noticed that if I layer different morph targets (as I would typically do in Maya with blendshapes), say an eye close with an additional cheek lift, the normals get messed up, which causes lighting issues.
As far as I can tell, the normals are not recalculated in realtime (probably for performance reasons), so my cheek lift morph target causes geometry folding that produces these bad normals. Even when the eye-close morph target “fixes” the geometry, the underlying normals still behave as if it were folded. This doesn’t happen in Maya, and if I import a Static Mesh of just this pose the normals look fine; it’s the layering of morph targets that is the problem.
So my question is: is there a workaround for this? Maybe a setting that I’m missing to recalculate normals in realtime? Performance is not an issue for me here, because this is a cinematic that will be rendered out to a movie clip. Or do I have to avoid creating morph targets that get layered in this way?
Morph targets are additive, so if one target includes the vertices of another, both sets of deltas will be added into the final pose. That seems to be what’s happening here, so the bottom should have its own target separated from the top: a top morph, a bottom morph, and a cheek morph, none of them sharing vertices.
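To illustrate what “additive” means here, a minimal sketch in plain Python (not Unreal’s actual code — the vertex positions and deltas below are made up): each target stores per-vertex deltas from the base mesh, and the final pose is the base plus the weighted sum of all active deltas, so targets that touch the same vertices stack on each other.

```python
# Hypothetical base mesh and per-vertex deltas for two morph targets.
base = [(0.0, 0.0, 0.0), (1.0, 0.0, 0.0)]
eye_close = [(0.0, -0.2, 0.0), (0.0, 0.0, 0.0)]   # moves vertex 0 down
cheek_lift = [(0.0, 0.1, 0.0), (0.0, 0.1, 0.0)]   # overlaps vertex 0

def apply_morphs(base, targets):
    """Apply morph targets additively.

    targets: list of (weight, deltas) pairs; deltas are per-vertex
    offsets from the base mesh. Overlapping deltas simply sum.
    """
    out = []
    for i, (x, y, z) in enumerate(base):
        for w, deltas in targets:
            dx, dy, dz = deltas[i]
            x, y, z = x + w * dx, y + w * dy, z + w * dz
        out.append((x, y, z))
    return out

final = apply_morphs(base, [(1.0, eye_close), (1.0, cheek_lift)])
# vertex 0 receives both deltas stacked: -0.2 + 0.1 = -0.1 on Y
```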
Yeah, they are additive, but that’s exactly what I want. Basically I have two targets: the eye closing and the cheek squinting. The problem is that the squint on its own does not look great, because it causes geometry folding when the eye isn’t closed. So it seems like the normals are pre-calculated at import time, and when I layer the two targets, the “folded” normals are still used even though the mesh geometry is no longer actually folded. The gif shows this briefly at the end.
It seems like unless I can calculate the normals in realtime rather than have them pre-computed, I can’t use this technique of layered morph targets that “correct” each other and instead need to combine the shapes into a single target that can be used.
Any idea if there is a way to compute normals in realtime? I came across this post but have yet to get it working for my needs: Recalculate Normals/Tangents at Runtime?
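For anyone trying to follow along, this is conceptually the recalculation that would need to happen every frame after the morph deltas are applied — a minimal CPU-side sketch in plain Python, not Unreal API (in-engine this would have to run on the deformed skeletal mesh, e.g. via the approach in the linked thread): accumulate each triangle’s face normal onto its three vertices and normalize. The cross product’s length is proportional to triangle area, so this gives area-weighted smooth normals for free.

```python
import math

def recompute_normals(verts, tris):
    """Recompute smooth vertex normals from deformed vertex positions.

    verts: list of (x, y, z) positions after morphs are applied.
    tris: list of (i0, i1, i2) vertex index triples.
    """
    normals = [[0.0, 0.0, 0.0] for _ in verts]
    for i0, i1, i2 in tris:
        ax, ay, az = verts[i0]
        bx, by, bz = verts[i1]
        cx, cy, cz = verts[i2]
        # Edge vectors; their cross product is the face normal
        # scaled by twice the triangle area (area weighting).
        ux, uy, uz = bx - ax, by - ay, bz - az
        vx, vy, vz = cx - ax, cy - ay, cz - az
        nx = uy * vz - uz * vy
        ny = uz * vx - ux * vz
        nz = ux * vy - uy * vx
        for i in (i0, i1, i2):
            normals[i][0] += nx
            normals[i][1] += ny
            normals[i][2] += nz
    out = []
    for nx, ny, nz in normals:
        length = math.sqrt(nx * nx + ny * ny + nz * nz) or 1.0
        out.append((nx / length, ny / length, nz / length))
    return out
```

Because it only reads the final deformed positions, it doesn’t matter how many layered targets produced them — the “folded” import-time normals never enter the calculation.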