I’m trying to learn Deformer Graph and I’m getting stuck on a basic example. I’d be grateful if somebody could help with the questions below.
Question 1:
As a first learning example, I’m trying to adjust the BlendPositions kernel to copy vertex positions between two cylinders. I’m running into issues trying to load my cylinder SKMs as skeletal mesh bindings. Does Deformer Graph require that meshes that should interact are in the same Blueprint? See attached images [1,2]
Question 2:
When I add my two cylinders into a Blueprint as “SkeletonMesh” and “SkeletonMesh_driver” and assign my deformer graph, I get a broken result. I’m not sure what the issue is; all the kernel does is query positions and set them on the second cylinder. See attached images [3,4]
Question 3:
How does Deformer Graph treat the “rest mesh/original mesh” and “current mesh” concepts? Are there different component bindings or nodes to request the mesh in these states?
I have a Maya background, which uses these concepts, and I’m wondering if there is something equivalent in Deformer Graph.
Open Level “L_copy_points_test” (/Game/Characters/Cylinder/L_copy_points_test.L_copy_points_test)
Open Level Sequence “LS_copy_points_test”
Play the sequence. Note that “SkeletalMesh_driver” is not affecting “SkeletalMesh”; both cylinders deform independently.
Select “SkeletalMesh” and enable Mesh Deformer “DG_copy_points_my_v1” in Details. This results in the cylinder mesh exploding.
Edit: I tried to upload my zipped project but the ticket submission website produced an error saying format .zip is not supported. I’m not sure how to share my zip file.
Before I dive into the questions, I want to mention that we have recently identified a few thread safety issues when using secondary skeletal mesh components in Deformer Graph that can lead to inconsistent behavior / crashes. The fixes for these issues are non-trivial and thus won’t be included in the upcoming hotfixes for 5.7. So, unfortunately, I would suggest sticking to the single skeletal mesh component use case until we address those issues in 5.8.
With that said,
About Q1: currently, a component binding in a deformer graph can only bind to other components in the same BP Actor, either via direct assignment or matching component tags.
About Q2: your setup looks good to me, but unfortunately you are probably hitting the thread safety issue that I mentioned earlier. We will address it in 5.8.
About Q3: the “Skinned Mesh” node represents the original mesh / mesh asset, while the “Read Skinned Mesh” node represents the “current mesh”, which is how we are able to chain multiple deformer graphs using Control Rig. (Worth noting that reading the current mesh state of a secondary component is not yet supported.)
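To illustrate that rest-vs-current distinction with a toy sketch (illustrative Python, not engine code; the function names are mine): each deformer in a chain reads the “current” state left by the previous one, while the rest/original data stays fixed.

```python
# Toy sketch of the rest-mesh vs. current-mesh distinction (illustrative
# Python, not engine code). Each deformer reads the current positions and
# writes new ones; the rest positions stay untouched, so deformers chain.

def chain_deformers(rest_positions, deformers):
    current = list(rest_positions)  # the "current mesh" starts at rest
    for deform in deformers:
        # Each deformer may read both the rest state ("Skinned Mesh")
        # and the current state ("Read Skinned Mesh").
        current = deform(rest_positions, current)
    return current

# Hypothetical example deformer: offset the current positions in Z.
def offset_z(rest, cur):
    return [(x, y, z + 50.0) for (x, y, z) in cur]
```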
“Worth noting that reading the current mesh state of a secondary component is not yet supported”
This is a serious limitation for our project needs. Will this be addressed in 5.8?
Our current project plan is to copy complex deformation onto our MetaHuman Body and Face components from a secondary mesh. The secondary mesh carries ML deformation, either proprietary or via Epic’s ML Deformer.
What we are trying to get out of Deformer Graph is to retain the MetaHuman face rig and its blendshape rigging, etc.
If we cannot rely on the secondary mesh to “transport” this deformation, it makes updating the MetaHuman more difficult for us.
Thanks for clarifying your use case! It sounds like you might need more than just reading from a secondary mesh component, which I assume is a body+face combined mesh that you used for MLD? If that is the case, another complication is that because the primary mesh (body or face) does not have the same vert count / vert ordering as the secondary mesh, direct vert-to-vert copying based on thread index won’t work without some type of offline-generated vertex index mapping from the secondary mesh to both primary meshes. Am I interpreting your use case correctly?
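As an aside, one way such an offline mapping could be generated is a nearest-rest-position match between the two meshes. A minimal sketch in standalone Python (my own illustration, not engine tooling):

```python
# Illustrative sketch of offline vertex-index-mapping generation (my own
# standalone Python, not engine tooling): match each primary vertex to the
# nearest secondary vertex by rest-pose position.

def build_vertex_mapping(primary_rest, secondary_rest):
    """Return a list where entry i is the secondary vertex index whose
    rest position is closest to primary vertex i."""
    mapping = []
    for (px, py, pz) in primary_rest:
        best_idx = min(
            range(len(secondary_rest)),
            key=lambda j: (px - secondary_rest[j][0]) ** 2
                        + (py - secondary_rest[j][1]) ** 2
                        + (pz - secondary_rest[j][2]) ** 2,
        )
        mapping.append(best_idx)
    return mapping

def copy_positions(mapping, secondary_deformed):
    """Copy deformed secondary positions onto the primary mesh via the mapping."""
    return [secondary_deformed[j] for j in mapping]
```

The mapping would be baked once offline and exposed to the kernel (for example as a buffer), so the copy no longer relies on thread index equality between the two meshes.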
We have proprietary tooling in place to copy mesh correctives from our whole-body mesh to separate face and body SKMs; let’s call them “ML Body SKM” and “ML Face SKM” for the purposes of discussing this ticket. So we have the same point count and order between these meshes.
This tooling lives outside Unreal, but we verify the results are as expected before we copy the deformation into the MetaHuman structure, to the Body and Face components.
The “ML Body SKM” and “ML Face SKM” would be secondary components in the context of Deformer Graph. The deformer graph should use them and copy their current point positions, found by vertex index, to the MetaHuman Body and Face components respectively. That’s what I was trying to achieve by using the BlendPositions kernel example.
This copy-points workflow is a VFX workflow we hope to replicate in Unreal. It will improve quality and save build cost for us. This is for a linear content project with a small realtime portion; runtime speed is secondary for us.
I see, your tools can do the heavy lifting! In that case you can potentially do the skinning deformation of ML Body SKM in the same deformer graph that does the copy. So, in a single deformer graph, do: WriteSkinnedMesh(Body, BlendPositions(LinearBlendSkin(Body), weight, MLDeform(MLBody))). I can give this workflow a test once the thread safety issue is fixed. Do you think this can work for you?
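To spell out the per-vertex math of that BlendPositions step (illustrative Python, not engine code; the function names are mine), the final position is just a lerp between the body’s linear-blend-skinned result and the ML-deformed driver result:

```python
# Illustrative per-vertex math for the pipeline above (not engine code):
# final = lerp(LinearBlendSkin(Body), MLDeform(MLBody), weight)

def lerp3(a, b, t):
    """Linear interpolation between two float3 positions."""
    return tuple(a_i + (b_i - a_i) * t for a_i, b_i in zip(a, b))

def blend_positions(lbs_positions, mld_positions, weight):
    """Blend the body's skinned positions with the ML-deformed driver
    positions, per matching vertex index (weight 0 = pure LBS, 1 = pure ML)."""
    return [lerp3(p, q, weight) for p, q in zip(lbs_positions, mld_positions)]
```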
[mention removed] a single deformer graph would work if we had this data interface
MLDeform(MLBody)
But we don’t have such an interface currently.
Our ML deformer is Ziva’s RT deformer, and it’s exposed to Unreal as a “zivaSkeletonMesh” type only, with no interface to Deformer Graph. Since we own Ziva RT we can consider implementing a data interface, but before doing that our thinking was to prototype the workflow using a secondary skeletal mesh and copying deformation. I hope that makes sense.
I’m still not entirely clear on whether “current mesh” access to a secondary mesh will be supported in 5.8, and how much of a hurdle this is for your team.
Do you think we have to move towards building a data interface for Ziva RT, rather than the current, more “shallow” SKM interface?
And a second question: it looks like Epic already provides a data interface for its ML Deformer in the ML Deformer sample. Would we be able to use this if we move our ML training to Epic’s solution?
One challenge we have with our project is that Epic is not providing a full example combining:
-Metahuman
-ML deformer for Body and Face component
-Chaos cloth
-Chaos hair
so we are discovering piece by piece which pieces work together (or don’t), which is affecting planning. I can’t stress enough how much a full working example would help ambitious character work in Unreal.
I am curious, how is “zivaSkeletonMesh” related to the secondary mesh component in DG? Is the secondary skeletal mesh writing its outputs to FMeshDeformerGeometry on FSkeletalMeshObjectGPUSkin via this custom deformer? And is it using Unreal’s ComputeFramework? To copy the deformation result of one mesh to another mesh, their deformation kernels need to run in series due to the data dependency (the secondary deforms first, then the primary copies from it), which is certainly easier to establish explicitly within the same deformer graph. But if the kernels are dispatched by different systems that don’t know about each other, it is a lot trickier to enforce that dependency due to the parallel nature of skeletal mesh rendering.
“I am curious, how is “zivaSkeletonMesh” related to the secondary mesh component in DG?”
The attached image shows our Metahuman blueprint.
Currently the zivaSkinMesh is its own component in the MetaHuman Blueprint. It functions like a regular SKM but has a “ziva rt binding” component. I’m not sure of the implementation details; I’m just explaining how this is currently exposed to users.
The image also shows “Face”, which is the regular MetaHuman face component. We haven’t been able to add the “ziva rt binding” directly to that due to a mismatch of joint topology in our ML training. That’s something we know we need to fix, and then maybe adding the Ziva RT binding there would also unblock us.
Conceptually, and from a production standpoint, I really like being able to copy points between parts of a character, because the “overwrite deformation” way of working is often easier in production than the “surgery approach” where you have to modify the deformation of an existing mesh. Copying points also lends itself to having complex subrigs developed by separate TAs, such as a complex ML- and simulation-based hand rig that is copied to a Body component. I hope Deformer Graph will support these advanced deformation workflows.
I see, it is good that ZivaSkeletalMesh appears to derive from SkeletalMeshComponent; however, I need to know the path it takes to deform the mesh to know how, and whether, it is even possible to extract its position buffer for reading in DG. Could you check whether a Groom can be attached to the ZivaSkeletalMesh correctly (the strand roots are bound to the Ziva-deformed surface instead of the linear blend skin surface)? Or whether you can sample the Ziva-deformed skeletal mesh surface in Niagara? Can you append additional deformation to Ziva by assigning a Deformer Graph that uses “ReadSkinnedMesh” to ZivaSkeletalMesh’s MeshDeformer property? That should give me some clues about the feasibility of this workflow.
BTW, as a reference: for UE’s ML Deformer, the render command that produces the delta buffers is explicitly enqueued during the ML Deformer Component tick, which allows Deformer Graph to enqueue a render command that accesses the completed work later, during the EndOfFrameUpdate phase. If Ziva is doing something similar, you can build a simple data interface similar to UVertexDeltaGraphDataInterface that allows DG to simply access the Ziva delta buffers and apply them to any component. The key here is that “TickComponent” and “EndOfFrameUpdate” are distinct phases run in series, so we can easily guarantee safe access.
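The safety argument above boils down to a producer/consumer pair running in strictly ordered phases. A toy sketch of that idea (illustrative Python, not engine code; class and method names are mine, loosely echoing the phase names):

```python
# Toy sketch of the two-phase safety guarantee (illustrative Python, not
# engine code): a producer fills a buffer in phase 1 ("tick"), a consumer
# reads it in phase 2 ("end of frame"). Because the phases run in series
# within a frame, the consumer never races the producer.

class FrameSchedule:
    def __init__(self):
        self.delta_buffer = None

    def tick_component(self, deltas):
        # Phase 1: the ML deformer produces the per-vertex delta buffer.
        self.delta_buffer = list(deltas)

    def end_of_frame_update(self, base_positions):
        # Phase 2: the deformer graph applies the completed deltas.
        assert self.delta_buffer is not None, "producer must run first"
        return [p + d for p, d in zip(base_positions, self.delta_buffer)]

    def run_frame(self, base_positions, deltas):
        # The two phases run strictly in series within a frame.
        self.tick_component(deltas)
        return self.end_of_frame_update(base_positions)
```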
“I am curious, if you add a simple deformer to the ziva mesh to have the mesh shift in Z for 50 units, using the read skinned mesh DI (data interface) and write skinned mesh DI, what happens in that case? Does it preserve the Ziva artifacts?”
[mention removed] I tested this out and made this offset graph
Adding such an ‘offset’ deformer to the ziva mesh results in the mesh being locked to the initial mesh state with the offset applied. The skin cluster/LBS and ZRT correctives are not coming through. I hope that gives you some insight into how the current implementation works.
I’m in the process of fixing the body and face training to remove the artifacts. Once that is done, it will be clearer whether we still have integration issues with Chaos cloth, hair, or even the face blendshapes / face rig.
Thanks for the update. So it looks like Ziva is not using the same position buffer that Deformer Graph uses to store deformed results. Two other things to try:
What happens if you use the deformer “/DeformerGraph/Deformers/DG_LinearBlendSkin_Morph” on the Ziva mesh? Do you still see Ziva artifacts?
What if you use the console command “r.SkinCache.Visualize Overview”? What color do you see on the Ziva mesh? Red = not using skin cache (meaning it is either using a deformer graph or vertex shader skinning); green = using skin cache.
Yes, correctives with artifacts are coming through in that case.
Showing red. The Face mesh (which uses zivaSkinMesh without a deformer) shows green.
There is also a “Render Method” attribute on the zivaSkinMesh that can be set to “Ziva”. This uses Ziva’s LBS implementation and bypasses Unreal, by the looks of it. In this mode the skin cache overview color is GREY.
Nice, one last thing to try: what happens if you use “/DeformerGraph/Deformers/DG_LinearBlendSkin” on the Ziva mesh? Do the Ziva artifacts go away? If so, it would mean that the Ziva MLD actually computes deltas in the form of morphs, which you could read directly in the non-Ziva mesh’s deformer graph with a kernel that looks like:
if (Index >= ReadNumThreads().x) return;
// The primary group of your kernel is the non-Ziva mesh, the primary group has the primary component directly wired to the primary group pin + 3 output pins
// The secondary group "ZivaMesh" should be bound to data interfaces that reference the Ziva Mesh, and the group should have 6 input pins
// run LBS_Morph on the Ziva Mesh
float3 LocalPosition = ZivaMesh::ReadPosition(Index);
float4 LocalTangentX = ZivaMesh::ReadTangentX(Index);
float4 LocalTangentZ = ZivaMesh::ReadTangentZ(Index);
float3 DeltaPosition = ZivaMesh::ReadDeltaPosition(Index);
float3 DeltaTangentZ = ZivaMesh::ReadDeltaNormal(Index);
float3 MorphPosition = LocalPosition + DeltaPosition;
float3 MorphTangentZ = LocalTangentZ.xyz + DeltaTangentZ;
float3x4 BoneMatrix = ZivaMesh::ReadBoneMatrix(Index);
float3 SkinnedPosition = mul(BoneMatrix, float4(MorphPosition, 1));
float4 SkinnedTangentX = float4(normalize(mul((float3x3)BoneMatrix, LocalTangentX.xyz)), LocalTangentX.w);
float4 SkinnedTangentZ = float4(normalize(mul((float3x3)BoneMatrix, MorphTangentZ.xyz)), LocalTangentZ.w);
// Take the Ziva Mesh Deformed Pos and write to non-Ziva Mesh
WriteOutPosition(Index, SkinnedPosition);
WriteOutTangentX(Index, SkinnedTangentX);
WriteOutTangentZ(Index, SkinnedTangentZ);
Now, you may again run into the thread safety issue (I don’t think it is 100% repro when you first set it up; it crashes more consistently after an engine restart when opening the character BP again).
I am not sure if you have access to our P4 server, but if you do, you can take a look at shelved CL50618108, which contains a rough draft of a workaround that you have to apply to all data interfaces (C++ code changes) used in your graph to avoid that thread safety issue.
what happens if you use “/DeformerGraph/Deformers/DG_LinearBlendSkin” on the Ziva mesh? Do the Ziva artifacts go away? If so it would mean that the Ziva MLD actually computes deltas in the form of morphs
The correctives are still active when I do that. It looks like the correctives are not applied as morphs, but via some other buffer.
Edit: when applying the “offset” deformer, I can see the correctives still play back as the joint animation plays, so the ML inference is active and causing deformation. It is the same deformation as if the correctives were applied as morphs, I think; the implementation just seems to be different.
I’ll try the kernel example you shared, just in case it works. Thanks for all the helpful suggestions.
Interesting… but with the offset deformer graph earlier, you were able to override the Ziva mesh with rest pose + offset, if I understood correctly. What if you use a deformer graph that does LBS + Morph + GlobalOffset? Do you still observe pure Ziva deformation with no offset applied?
It looks like Render Method cannot be set to Ziva for our purposes. What other options does it offer, and what color does each show?
Just to clarify, is there a case where the Ziva mesh is showing both the ziva deformation and green color in skincache vis? And is there a case where Ziva mesh is showing both the ziva deformation and red color?
with the offset deformer graph earlier, you were able to override the Ziva mesh with rest pose + offset, if I understood correctly… what if you use a deformer graph that does LBS + Morph + GlobalOffset? Do you still observe pure Ziva deformation with no offset applied?
I think what I said in an earlier post is not correct. This part:
Adding such an ‘offset’ deformer to the ziva mesh results in the mesh being locked off to the initial mesh state + offset applied. The skincluster/LBS or ZRT correctives are not coming through.
I looked at this again and, in fact, adding the ‘offset’ deformer does not lock the zivaMesh in its initial state. It merely removes the default LBS from the deformation. Playing the timeline on the zivaMesh, I can still observe the correctives being triggered by the joint animation, and the offset deformer is also working. I can interactively change the offset, too. So both the Ziva correctives and DG are working. What this suggests to me is that we can add the zivaRtBinding component directly to a MetaHuman component and get ML deformation that way. We don’t need a second mesh to copy point positions from. I’m close to having working ML training, so I can finally try this.
Just to clarify, is there a case where the Ziva mesh is showing both the ziva deformation and green color in skincache vis? And is there a case where Ziva mesh is showing both the ziva deformation and red color?
The Ziva mesh always shows GREEN if no DG is active. I think I gave you the wrong response on this before because it turns out the “Always Use Mesh Deformer” checkbox overrides the “Mesh Deformer” setting. So you can disable “Mesh Deformer” and think the DG is no longer active, but if “Always Use…” is enabled, the DG is still active. This is counter-intuitive to me, especially since the Mesh Deformer drop-down is greyed out.
I’ll report back on whether adding Ziva to the MetaHuman components directly works. If it does, we don’t need to copy positions from a second mesh. I like the copy approach as a VFX technique, but I understand its application in the realtime kernel world isn’t trivial, so we need not spend too much time on it.
[mention removed] I’ve been able to apply ZRT directly to the MetaHuman components, so we can close this ticket.
Thank you for your guidance and tips about DG; it’s been super helpful even though in this case we aren’t using it.
I hope DG will continue to be developed, including support for secondary meshes to enable a “copy points” type deformer setup. I love this way of working and hope it can eventually be replicated in UE.