Can you comment on the roadmap for Deformer Graph?
In particular, can I take a dependency on it when developing my own features, and be confident that it will be supported into the future?
Thanks
Hi,
Thanks for reaching out! Glad to hear that you are considering building features on top of Deformer Graph! We do plan to keep supporting Deformer Graph, as it already serves ML Deformer, UEFN MetaHumans, and Groom. Whatever new changes we make, we will do our best to maintain backwards compatibility for existing assets. In addition, Compute Framework, the backend of Deformer Graph, has been fairly stable for a few years and also powers the PCG graph’s GPU-related features.
Priority-wise, the near-term focus is to continue stabilizing Deformer Graph so that existing features work as expected on as many platforms as we can. Minor features may be added as demands from dependent systems come up, but we expect no major changes to the overall architecture. So you should be able to rely on Deformer Graph as a way to simply run text-based custom GPU kernels on bound components via data interfaces, and to consider the Deformer Graph Editor merely a UI layer on top of Compute Framework.
In the long term, Deformer Graph may be shaped by multiple teams, so you can expect improvements such as better workflow integration with skeletal meshes via higher-level composition of deformers and kernel fusion, more deformer graph functions for groom deformation/physics, and better debuggability.
In general, there are three layers at which you can extend Deformer Graph:
1. Build your own deformer graph functions or deformer graph assets (strong backwards compatibility).
2. Define your own data interfaces for an existing component source (Skeletal Mesh / Groom).
3. Define your own Mesh Component and your own data interfaces, and merely use Deformer Graph as a UI to write kernels for your components.
Obviously, layers 2 and 3 may require minor tweaks when a new engine version comes out, but at this point we don’t have any planned changes that would require such tweaks, since we just made one in 5.6 that addressed some long-standing technical debt around how the thread index is exposed to kernels.
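If you end up at layer 2 or 3, the usual entry point is a subclass of UOptimusComputeDataInterface. Here is a minimal sketch of the shape of such a class (the class and pin names are placeholders, and the exact virtual signatures have shifted between engine versions, so mirror one of the engine’s existing data interfaces for the current ones):
{code}
// Rough sketch of a custom data interface (layer 2/3). Class and pin names
// are placeholders; exact virtual signatures vary by engine version, so
// mirror an engine example such as the skinned mesh data interface.
UCLASS()
class UMyJiggleDataInterface : public UOptimusComputeDataInterface
{
	GENERATED_BODY()

public:
	// Name shown on the data interface node in the graph editor.
	FString GetDisplayName() const override;

	// Pins exposed on the node, each mapped to an HLSL accessor function.
	TArray<FOptimusCDIPinDefinition> GetPinDefinitions() const override;

	// ...plus the UComputeDataInterface overrides that declare the shader
	// functions/parameters, emit the HLSL, and create the per-instance
	// data provider (GetSupportedInputs, GetShaderParameters, GetHLSL,
	// CreateDataProvider).
};
{code}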
By the way, if you are interested in using Deformer Graph for physical simulation, one thing worth noting is that we currently don’t have a nice solution for maintaining simulation state across LODs, because there is no mapping data between LODs, and in some cases it might not even be possible to generate such a mapping, at least for skeletal meshes.
So if you are interested in that type of problem, you might want to keep control over what data is fed into the graph when designing your features.
Happy to go into detail about any specific Deformer Graph features you are interested in! And definitely let us know if there are features we can add to help with your development!
Thanks,
Jack
Hi Jack,
Thanks for the answer, that’s great information and is what I was hoping to hear!
I’m currently working on a custom component with data interfaces and custom deformer graph nodes. As you say, I think I can feed the data I need in from the custom actor component via a component source binding.
One thing I couldn’t figure out is how to create a new deformer graph function - something like DG_Function_LinearBlendSkin, a node that has its own shader code bundled inside and could serve as a pre-built piece for a user to combine with their own deformer graph.
Is there tooling for making custom deformer functions currently? And do you know if there are docs for this?
Thanks,
Cormac
Apologies for the lack of documentation. To create a function, you first build and configure your logic in a subgraph, then turn that subgraph into a function.
Worth noting that functions are currently immutable once created, as we didn’t have time to add features for propagating function graph changes across the assets that reference a function. That is why, for now, you have to use a subgraph as a scratchpad to configure your function before it is turned into one. However, immutability also guarantees that existing assets will never break due to function changes, so at the moment it is a nice middle ground we settled on given the time constraints, but something we want to improve upon in the future.
Another convention we use is that each deformer graph asset should only have one public function, so that users can use the Reference Viewer to more easily see which functions are referenced by their own graphs, indirectly via asset references. Here is a workflow we used to move a function between two deformer graphs.
Hope this helps!
Jack
Thanks!
Cormac
Hi Jack,
I have 2 more Deformer Graph questions if that’s ok:
1)
Is it only possible to have one component binding in a graph? (or at least, in a node group in a graph)
Say I want to add a custom component with my own physics jiggle simulation, and make a component binding and data interfaces for the graph - is there a way to use that to affect the skinned mesh?
If I mix component bindings I keep getting graph compile errors of the form:
{quote}
Component binding for pin ‘Primary Group.Delta’ is different from the component bindings of the other pins in its group OptimusNode_CustomComputeKernel
Multiple bindings found for pin Position OptimusNode_DataInterface
{quote}
Ideally I’d like to combine my custom nodes with existing nodes like skinning, morph targets or ML deform - is that possible?
Or does my custom component need to package the skinned mesh inside it somehow?
2)
How do the component bindings under Deformer Settings in the Actor details panel work?
I’m applying the deformer on a skeletal mesh actor.
In the drop-down for my custom component binding I only seem to get the choice of Auto or SkeletalMeshComponent0. Will Auto choose my custom component?
Thanks very much,
Cormac
1)
You should be able to have more than one. To use data interfaces for two different components in the same kernel, you will need to create a “secondary input binding group”, which is right below “Primary Bindings” in the details panel. Pins in the same group should represent data from the same component.
2)
You can check the logic for how components are mapped in UOptimusDeformerInstanceSettings::GetComponentBindings(), as well as the UI logic for the dropdown in FOptimusDeformerInstanceComponentBindingCustomization::CustomizeHeader()
Basically, the idea is that a component binding in a deformer can bind to any component in the actor: first via the dropdown, if the user has specified a mapping, and second via component tags, which you can set both on the component binding’s details panel and on the actual component’s details panel in the Blueprint editor.
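For example, tagging your custom component in C++ so that a binding can find it would look something like this (UMyJiggleComponent and the “JiggleSim” tag are hypothetical; the tag just has to match whatever you set on the component binding):
{code}
// Tag the component so a deformer component binding can locate it.
// "JiggleSim" is a placeholder and must match the tag configured on the
// component binding in the deformer asset's details panel.
// 'Actor' is the actor that owns the component.
if (UMyJiggleComponent* Jiggle = Actor->FindComponentByClass<UMyJiggleComponent>())
{
	Jiggle->ComponentTags.AddUnique(TEXT("JiggleSim"));
}
{code}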
Hope this helps!
Jack
That’s very helpful, thank you!
Cormac
Hi Jack,
Thanks for your help so far - I have another question, probably more about the future:
In Engine\Plugins\Animation\DeformerGraph\Shaders\Private\DataInterfaceCloth.ush, the cloth mapping has to be done once each for position, tangentX, and tangentZ, where it would probably be more efficient to calculate all three values for each vertex in a single pass and then output them.
The comment says:
{quote}
// Individual functions all call the same (expensive) code.
// There is the hope that the shader compiler may optimize here.
// But really we will need to work out a way for Compute Kernels to do some shared work once and carry the results of that in an opaque context struct.
{quote}
Similarly, in my kernel shader I’d ideally like to pre-calculate some data (say, an array of transforms similar to bone transforms) that’s shared across the per-vertex computations, to avoid duplicating work - but that doesn’t seem possible?
Question then:
a) Is there a way that I’ve missed that allows this sort of precomputation/sharing of data in a kernel (or between kernels)?
b) Is there a future plan for this kind of shared work, as the comment suggests?
Thanks again,
Cormac
I think the comment you saw in DataInterfaceCloth.ush likely won’t affect you, given that you have control over what outputs you expose on your data interface. Technically, the cloth data interface could just provide a function binding like
FClothResult GetClothResult(int Index)
that directly gives the user all three vectors (Position, tangentX, and tangentZ), given that by computing one of them you get the other two for free.
The problem above may be different from the problem of precomputation, which I guess you are more interested in? In general, when it comes to precomputation, we have the setup graph + resources, which you can use to precompute certain data on the GPU at the beginning and reuse those results later in the update graph. Let me know if you want me to go into more detail. What do you use to pre-calculate the array of transforms? Is the input data available on the GPU?
You’re right, I think I’m talking about 2 sub-problems here.
The first is where I want to pass position and tangents to the Write Skinned Mesh node, but where it’s more efficient to compute all 3 at once as a struct.
Maybe I could solve that by passing the struct into a Custom Compute Kernel node and passing its output to Write Skinned Mesh?
The second problem is about precomputation of data for my own plugin’s Data Interface kernel to use.
I have something like simulated springs that I want to use to deform the skinned mesh.
I think I can pass the data representing these springs (as an array of particle positions and some parameters) into the deformer graph, using the Render Dependency Graph to allocate an RDG_BUFFER_SRV in my overridden FComputeDataProviderRenderProxy::AllocateResources().
Let’s say the springs themselves are simulated on CPU, and I’m just passing the results of the updated simulation.
What I’d like to do is calculate an array of ‘bone transforms’ from the array of springs, and then basically do a skinning computation for each vertex in the skinned mesh over these ‘bone transforms’.
As far as I can see I have to do one of:
a) in the kernel (which has a Vertex domain), recalculate the transform from each spring influencing each vertex on the fly - which would mean a lot of duplicated work.
b) precompute the transforms on CPU and pass those in the RDG_BUFFER_SRV
c) (possibly?) create a custom Domain for my springs, add a data interface node and kernel to the deformer graph that does that calculation, and have my main data interface node consume its output (the bone transforms)
Would (c) work?
Even so, it’s more complex than (a) or (b), since those only require my one data interface node (which would work similarly to UOptimusClothDataInterface).
Hope I’ve explained that in a way that makes sense?
Cormac
Yeah, about your first question: you can register a custom data type, even a GPU-only struct type, in which case you don’t need to supply a property conversion function, similar to how float3x4 is registered in FOptimusDataTypeRegistry.
About your second question: you are right about (a), (b), and (c), and I think all of them can work, with different trade-offs. But it seems to me that what you are doing is very similar to Chaos Cloth/Flesh, where the simulation runs on the CPU and the deformer graph just binds the sim mesh to the render mesh. In that case it sounds like (b) makes the most sense. Basically, once you are done simulating, you should transform your data into a convenient format on the CPU before uploading it to the GPU, so that each vertex only needs to process the ‘bone transforms’ it has weights for, just like linear blend skinning.
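To make (b) concrete, here is a minimal sketch of what the upload side could look like, assuming a hypothetical render proxy that already holds the CPU-packed transforms (the class, member, and buffer names are all placeholders):
{code}
// Minimal sketch: upload CPU-computed 'bone transforms' as a structured
// buffer SRV inside the data provider render proxy. Names are placeholders.
void FMySpringDataProviderProxy::AllocateResources(FRDGBuilder& GraphBuilder)
{
	// Transforms already packed on the CPU, e.g. 3x4 matrices stored as
	// three float4 rows per transform.
	const TArray<FVector4f>& Rows = PackedTransformRows;

	FRDGBufferRef Buffer = GraphBuilder.CreateBuffer(
		FRDGBufferDesc::CreateStructuredDesc(sizeof(FVector4f), Rows.Num()),
		TEXT("MySpringTransforms"));

	GraphBuilder.QueueBufferUpload(Buffer, Rows.GetData(),
		Rows.Num() * sizeof(FVector4f));

	// Cache the SRV and bind it when gathering dispatch data, so the kernel
	// can read these transforms just like LBS bone matrices.
	TransformsSRV = GraphBuilder.CreateSRV(Buffer);
}
{code}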
Let me know if I am understanding your questions correctly.
Yes, you’ve got it, and that makes sense - computing the transforms on CPU is probably the simplest solution, and then the GPU part is really just a sort of custom skinning.
Thanks!