Hi everyone,
Here’s some context:
We have a SkeletalMesh representing a humanoid body, which contains multiple morph targets: some for facial expressions / lip sync, and some dedicated to full-face character customization (e.g. one morph target blends the default face into another face).
From our tests, we found that we need a Pose Asset to fix the jaw movements during lip sync and the eye-tracking movements (bone animations), since the faces are so different that each needs its own pose (see the screenshot below, showing the skeletons in their base pose).
The issue:
When comparing the resulting blended face to the original mesh, we noticed that the jaw normals seem to be calculated from the jaw's position before the Pose Asset is applied: the normals look the same in the Skeletal Mesh viewer (no Pose Asset active) and in the Pose Asset viewer (Pose Asset active, yet the normals are unaffected).
[Image Removed]
Is this expected behavior, or is there a way to fix this issue?
Many thanks,
Thomas
Here’s an image of the jaw position from the side.
Left is the default face with its default bone positions; right is the morphed face with the same bone positions (the jaw is displaced).
The Pose Asset helps us correct the default bone positions of the morphed faces.
[Image Removed]
Hi, it’d be useful to get a bit more information on your setup for this:
- How are you applying the pose asset - is that just being done via an animation blueprint?
- Do you have Recompute Normals set on the mesh asset?
- What’s the maximum number of influences that you have on the mesh in your DCC?
Thanks for the extra info on this. Pose assets (along with the anim BP) are bone-based solutions, so they don’t directly deform the mesh or affect the normals themselves. But if the skinning that is applied results in a deformation that invalidates the normals, they need to be dynamically recalculated, and the engine doesn’t do this by default without RecomputeTangents being set. This is actually a common problem when applying multiple morph targets that affect the same verts, but anything that deforms the mesh can cause the same kind of issue.
I would try enabling the RecomputeTangents property (on the problematic material sections in the skeletal mesh asset) to see if that resolves the issue. (Not the Recompute Normals property that I mentioned yesterday.)
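For reference, the recompute pass runs through the GPU skin cache, so the skin cache needs to be active for the per-section flag to do anything. As a quick sanity check, here’s a minimal editor-Python sketch (assuming the Python Editor Script Plugin is enabled); these are standard engine cvars, but note that r.SkinCache.CompileShaders is read-only at runtime and has to be set in DefaultEngine.ini instead:

```python
import unreal

# Enable the GPU skin cache for this session (r.SkinCache.CompileShaders=True
# must already be set in DefaultEngine.ini for the skin cache shaders to exist).
unreal.SystemLibrary.execute_console_command(None, "r.SkinCache.Mode 1")

# 2 (default) = recompute tangents only on sections with the flag ticked,
# 1 = recompute on all sections, 0 = never recompute.
unreal.SystemLibrary.execute_console_command(None, "r.SkinCache.RecomputeTangents 2")
```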
Let me know how you get on.
Hi, it’s hard to tell without looking at the assets directly what’s going on there and whether the differences are due to the deformed geometry or the normals. One thing you can do, however, is test using a Deformer Graph to recompute the tangents rather than the default skin cache implementation. There are various implementations available with Deformer Graph that may improve results or help track down the problem.
Can you try the following:
- Add the Deformer Graph plugin to your project
- With the skeletal mesh asset open, find the Default Mesh Deformer property
- Set that to DG_LinearBlendSkin_Morph and test how it looks in the viewport (the graphs live in the plugins folder so you may need to enable that filter option)
- this should be similar to the default skin cache implementation without recomputing tangents
- Now try DG_LinearBlendSkin_Morph_Cloth_RecomputeNormals and see how that looks
- this should recompute the tangents in a similar way to the skin cache implementation
- Finally try DG_LinearBlendSkin_Morph_Cloth_RecomputeNormals_Scatter
- this implementation deals with the half-edge problem around mesh chunks that the previous deformer (and the skin cache implementation) doesn’t, so should give better results. However, if that was the problem I would expect to see obvious seams between chunks which I don’t see in your screenshots/video
Let me know how you get on. (If you’re still having problems, screenshots with each of the deformer graphs would be useful - or just the assets if you can share them.)
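In case it’s easier to switch between the deformers while testing, here’s a small editor-Python sketch for assigning the Default Mesh Deformer via script. The mesh path is hypothetical, and I’m assuming the property is exposed to Python as default_mesh_deformer and that the plugin content mounts under /DeformerGraph - adjust both paths to whatever you see in the Content Browser with plugin content visible:

```python
import unreal

# Hypothetical asset paths - replace with your own. The stock deformer graphs
# live in the Deformer Graph plugin's content folder (assumed mount point
# /DeformerGraph here).
MESH_PATH = "/Game/Characters/SK_Humanoid"
DEFORMER_PATH = "/DeformerGraph/Deformers/DG_LinearBlendSkin_Morph_Cloth_RecomputeNormals"

mesh = unreal.EditorAssetLibrary.load_asset(MESH_PATH)
deformer = unreal.EditorAssetLibrary.load_asset(DEFORMER_PATH)

# Assumption: the Default Mesh Deformer property is exposed to Python as
# "default_mesh_deformer" on the skeletal mesh (UE 5.x).
mesh.set_editor_property("default_mesh_deformer", deformer)
unreal.EditorAssetLibrary.save_loaded_asset(mesh)
```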
Good news, the DeformerGraph definitely fixes this issue.
Here’s some screenshots:
Default
[Image Removed]
DG_LinearBlendSkin_Morph
[Image Removed]
DG_LinearBlendSkin_Morph_Cloth_RecomputeNormals
[Image Removed]
DG_LinearBlendSkin_Morph_Cloth_RecomputeNormals_Scatter
[Image Removed]
I don’t notice any difference between RecomputeNormals and RecomputeNormals_Scatter though - is there a performance cost difference between the two modes?
Thanks again
That’s good news, and also interesting to hear that it behaves differently to the skin cache pipeline. The RecomputeNormals_Scatter shader code is somewhat more complex than the RecomputeNormals implementation, although it’s hard to say if there would be a noticeable difference in performance between the two. But if you want to err on the safe side and RecomputeNormals gives you what you need, you should be fine to go with that. RecomputeNormals_Scatter would be required if you ever see seams between mesh sections.
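If you want to measure it rather than guess, the standard console stats will show the cost of the deformer passes. A quick sketch (again assuming editor Python, though typing the commands straight into the console works just as well):

```python
import unreal

# Toggle on-screen GPU timings (look for the skin cache / deformer passes),
# then capture a single frame in detail with ProfileGPU.
unreal.SystemLibrary.execute_console_command(None, "stat gpu")
unreal.SystemLibrary.execute_console_command(None, "ProfileGPU")
```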
Unfortunately, as I mentioned in my previous message, we get incorrect results with RecomputeTangents.
See images below, left is no morph target, right is with one morph target.
As you can see, the lighting on the face is not correct, especially on the left side of the face (which is on the right side of the screen, just to be extra sure haha).
With RecomputeTangents
[Image Removed]
Without
[Image Removed]
Here’s a video I made to illustrate the issue with RecomputeTangents.
I’m moving a morph target back and forth, first without RecomputeTangents and then with it.
Hi, thank you for the suggestion, I’ll investigate!