Hi! Thanks for reaching out! Could you clarify what type of deformation you are interested in achieving? And are you looking for a solution to alter the face of a skeletal mesh during gameplay, or something more akin to a face creator, where you want to bake the resulting face into a new skeletal mesh?
Deformer Graph does have a morph target data interface, which outputs the combined vertex offsets of all active morph targets; you can search for “LinearBlendSkin_Morph” in the Deformer Graph plugin’s content folder to find some examples.
About “vertex offsets based on updated animations”: currently a morph target is assumed to record the per-vertex difference between the base mesh and the morphed mesh in mesh object space, so it is applied before skinning by default (see DG_LinearBlendSkin_Morph). Ultimately, though, these are simply per-vertex float3 values, so you can use them however you like in the skinning kernel. If you want to use these vertex offsets differently, you might need to build custom tools in your DCC to generate them in a different way, which could allow you to achieve something like tangent-space blend shapes applied after bone skinning. In that case, the vertex offsets have to be recorded in tangent space, and that tangent space has to match how Unreal computes the tangent space.
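To make the difference concrete, here is a rough kernel sketch of the two orderings. This is illustrative only — the `Read…`/`Compute…` function names are placeholders for the actual Deformer Graph data-interface reads, not the exact engine API:

```hlsl
// (a) Default morph target behavior: the delta is an object-space offset
// from the rest pose, so it is added BEFORE skinning.
float3 RestPosition = ReadPosition(VertexIndex);        // rest-pose position (placeholder name)
float3 MorphDelta   = ReadMorphDelta(VertexIndex);      // combined active morph offsets (placeholder name)
float3x4 SkinMatrix = ComputeBlendedBoneMatrix(VertexIndex); // weighted bone matrix (placeholder name)
float3 SkinnedA     = mul(SkinMatrix, float4(RestPosition + MorphDelta, 1.0f));

// (b) Post-skin variant: the delta must instead be authored in tangent
// space and rotated into the posed frame using the skinned tangent basis,
// which must match how Unreal computes the tangent space.
float3 SkinnedPos       = mul(SkinMatrix, float4(RestPosition, 1.0f));
float3x3 TangentToPosed = ComputeSkinnedTangentBasis(VertexIndex); // placeholder name
float3 SkinnedB         = SkinnedPos + mul(TangentToPosed, MorphDelta);
```

The key point is that in (b) the stored delta is only meaningful if your DCC exported it in the same tangent space the kernel reconstructs.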
This may or may not help you, but if you really need to customize how the skinning matrix is computed, you can use a Set Animation Attribute node in a post-process Control Rig to attach any numeric values to each bone and write your own skinning algorithm. You can retrieve the per-bone animation attributes from the Skeleton data interface, similar to how a bone matrix is retrieved. Feel free to take a look at the DQ_DualQuat asset, which contains some example code showing how to loop through all skinned bones for the current vertex.
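The per-vertex bone loop looks roughly like the sketch below, loosely modeled on the DQ_DualQuat example. Again, the read-function names are illustrative stand-ins, not the exact data-interface signatures:

```hlsl
// Custom skinning sketch: loop over the bones influencing this vertex,
// blending each bone's contribution and folding in a per-bone value
// attached via Set Animation Attribute in a post-process Control Rig.
float3 RestPosition = ReadPosition(VertexIndex);            // placeholder name
uint NumInfluences  = ReadNumBoneInfluences(VertexIndex);   // placeholder name

float3 Skinned = float3(0.0f, 0.0f, 0.0f);
for (uint Influence = 0; Influence < NumInfluences; ++Influence)
{
    uint  BoneIndex = ReadBoneIndex(VertexIndex, Influence);  // placeholder name
    float Weight    = ReadBoneWeight(VertexIndex, Influence); // placeholder name

    float3x4 BoneMatrix = ReadBoneMatrix(BoneIndex);          // placeholder name

    // Custom per-bone scalar from the animation attribute; here it just
    // scales the contribution, but this is where your own formula goes.
    float MyBoneValue = ReadBoneAnimAttributeFloat(BoneIndex); // placeholder name

    Skinned += Weight * MyBoneValue * mul(BoneMatrix, float4(RestPosition, 1.0f));
}
```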
I have tried your method, but it can’t solve my problem at present, because the deformer and the skin vertex deformation are rendered together.
As you can see in the video, when the eyes follow the head deformation, the vertex offset controlled by the eye bones changes.
I want to solve this problem. Could you provide some more specific suggestions or cases?
Since MetaHuman facial animation is mostly based on bone positions, you will likely need to change bone transforms to match your morph target in some way. Deformer Graph doesn’t allow you to change bone transforms; it can only read bone transforms and other bone-related custom animation attributes (where you could potentially compute some type of offset using a custom Control Rig node) and deform vertices in your own custom kernel, as none of the built-in deformer graphs do what you want here. However, computing such offsets and passing them to the deformer graph as animation attributes essentially means recreating and retargeting the facial skeleton, which can easily turn into something complex, so it might not be the right tool for the job. You might also notice that it is not just the eyes that have deformation issues; you might need to adjust the lip area as well.
UE’s Mutable system, on the other hand, does offer some in-game bone-adjusting functionality that may or may not help. Mutable can adjust selected bone positions following a mesh morph (Mesh Morph node) or following a procedural morph (Reshape Morph node). But given that MetaHuman’s RigLogic (its bone deformation system) is a black box, the current bone-movement formula might not exactly fit your needs, so you might have to tweak it to your specs. Is Mutable something you are already using? Did you find a limitation with it that is blocking you from achieving what you want and that motivates using Deformer Graph instead?
Mutable currently has many issues and cannot properly deform the skeleton.
I have already contacted other technical staff for support in another thread.
The reason for using a deformer is that it allows deformation to be applied as a post-process, meaning the vertex positions can be modified again after the skinning matrices have been applied. In this way, we can apply deformations that affect the skeleton after all animation calculations on the vertices are complete, effectively performing a second pass of vertex processing.
Hi! While Deformer Graph does indeed allow you to apply post-process deformation after skinning, it also limits what type of deformation you can perform, since its input vertex position is no longer constant: a morph target “deformer”, which requires the input mesh to be the rest-pose mesh, wouldn’t behave the same if it were applied after skinning. You can test this by moving the two lines related to morph target deltas in the LinearBlendSkin_Morph deformer function to after the skinning calculation (the lines that contain “BoneMatrix”), either in the same kernel or in a second custom compute kernel connected to that deformer function. (You can right-click the node and choose “Expand Collapsed Node” to localize the function and access/modify its kernel source code.)
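The reordering experiment looks roughly like this in kernel code. The variable names below are placeholders, not the exact LinearBlendSkin_Morph source, but the structure matches the experiment described above:

```hlsl
// Original ordering (sketch): object-space morph delta applied to the
// rest pose, THEN skinned -- this is how morph targets are meant to work.
//   LocalPosition += MorphDelta;
//   OutPosition    = mul(BoneMatrix, float4(LocalPosition, 1.0f)).xyz;

// Experiment: move the delta AFTER the skinning calculation. The same
// object-space delta is now added to an already-posed vertex, so for any
// rotated pose (e.g. a turned head) the offset points the wrong way --
// which is exactly the limitation being demonstrated.
float3 OutPosition = mul(BoneMatrix, float4(LocalPosition, 1.0f)).xyz;
OutPosition += MorphDelta; // rest-pose-space delta applied post-skin: incorrect
```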
Another quick experiment you can try with Deformer Graph is to use a spatial deformer, like a lattice deformer, to apply the post-process deformation. The engine ships with a Lattice Deformer in the Animator Kit plugin, and this YouTube video shows how to use it (https://youtu.be/6HAiRfRc5bQ?t=333). In Sequencer, you can constrain/space-switch the lattice deformer to the head bone so that the deformer follows the head. With this setup you should see that the MetaHuman facial deformation is applied first and the lattice deformation is applied on top as a post-process. It is similar to this setup (https://youtu.be/PDz4bvP2MG0?t=2360), where the deformation is applied on top of the dragon’s facial performance. But this only works because it is a deformer that maps a position in world space to a different position, instead of mapping a vertex’s position in rest-pose space to another position in rest-pose space, which is what a morph target does. Hopefully this demonstrates what Deformer Graph is capable of and gives you some inspiration. Here is also a thread on how you can integrate this type of deformer into your character directly, instead of having to rely on Sequencer (https://forums.unrealengine.com/t/is-there-a-way-to-bake-deformer-graph-animations-in-level-sequencer-into-an-animation-sequence/2298398/11?u=jack.cai).
I would be happy to answer any questions you have about how to pipe custom data into Deformer Graph if you decide to prototype a custom solution with it. The lattice deformer uses Control Rig + deformer variables, but if you need more customization, you can even build your own Deformer Graph data interface by referencing existing data interface implementations like OptimusDataInterfaceAnimAttribute.h.