Mutable: More Parameters

[Image Removed]

Please add new nodes such as String and GameplayTag, and allow them to be modified externally.

For example, if I need to dynamically modify 100 deformations now, I have to create 100 Float nodes, which is very complicated. In addition, I find it difficult to create custom nodes and hope that this process can be simplified.

Hey there,

Thanks for the feedback. There is the concept of a morph stack definition that you could use that does what you’re asking for.

https://github.com/anticto/Mutable-Documentation/wiki/Node-Mesh-Morph-Stack-Definition

And in the sample it looks something like this:

[Image Removed]

The idea is that while you still need to make floats, they can be external-facing. You can then pick and choose morphs and edit the morph stack in one go.

Hope this helps,

Dustin

Hey all,

Yes, there have been a few requests for this. Regarding the original issue, you could add some code to the plugin to only expose the pins for the morph targets that you want to see. You would do that by extending and building a CustomizableObjectNodeMorphStackDefinitionDetails class.

In terms of more parameterization, the team is working on dataless assets. The idea is that the graphs become more like function containers that run operations on meshes, which then become inputs to those graphs. As we move towards that functionality, we have opportunities to improve those actions.

“Is there perhaps a tutorial or example of how to make new nodes that add these piggy-backed parameters to the CustomizableObject model”

This is a good idea; we don’t have anything like that today, but it’s something we can write up in the future.

Dustin

It is not, and ultimately, it depends on your skeleton. If your feminine skeleton does not have the same proportions as your masculine one, then you can decide. Mesh switches can be a good path; the other option is to use a data table with each row corresponding to a body type. The path you choose typically depends on the amount of content you need to manage. For example, if there are two of everything for each slot on your character, then a table is the better way to go. If it’s just the body, and all of the clothing works for both masculine and feminine, a switch might be better.

If your proportions are not the same, you need to adjust the reference pose. So, for whichever path you choose between a data table and a mesh switch, you will need to create a pose asset that represents the offset of your reference pose from the mesh you decided was the primary mesh in the mesh component’s reference mesh, and place it just before each mesh section call. I did a talk about this at Unreal Fest in Bali that will be on YouTube in September.

Dustin

I might not be fully understanding the question, but on the Add To Mesh Component and Mesh Component nodes, there is a LOD strategy (https://github.com/anticto/Mutable-Documentation/wiki/Node-Mesh-Component#automatic-from-mesh), with Automatic from mesh being the default. If those settings are not set up similarly on your child objects, you could run into issues. Additionally, the differences between LODs in the Metahuman setup can be substantial, so there may be issues there that could be causing bugs. If you have a repro example that you want to forward along, we can check that and fix any bugs.

When you say the base section of the face has a material missing, is it the main section of the face?

Hey there,

Apologies for the wait, I needed to spend some time looking into the issues. I’ll do my best to answer each of the questions/statements in order. The first post will be A and the second will be B.

A1: Yes, Mutable can be complex, especially with the increased complexity of a Metahuman character. Like I mentioned before, we are looking to make it easier in the future with the dataless setups, but those are not available yet.

A2: The mesh morph node’s ReshapeSkeleton feature does its best to move the skeleton by first finding a triangle to bind to, then storing barycentric coordinates and a distance along the normal. Once the mesh is morphed, it uses that data to reposition the bone. The problem is that for something like eyes, it’s not exact.

A3: That is correct: each modifier is a node that acts as an instruction, and it will need to be duplicated in places where you perform the operation again. I mention this below, but you may not need to do it for every eye part; you will, however, have to do it for other parts to apply the morphs.

A4: Correct, in the Metahuman setup, the body drives the head. We use a few tools to help with this that you will need to make use of, or you will need to change the LOD setups, to get rid of the issue. The LODSync component is the primary one.

Regarding A2, another approach might be to separate the two things you need to do into different actions. Metahuman generates a skeleton for each body it creates; you can use that skeleton as an offset position in Mutable. So first you create your morph target for your head based on your new body type and set that. Then, for the eye, you can use a pose asset that contains the delta between your base and your “fat” body type to position the skeleton. In Mutable, that setup would look something like:

[Image Removed]

The problem is that this would not be blendable, which I think is what you’re going for?

B1: I’m not sure what this means, but in the simplest version of your case, I would set up the eye to work (albeit in the simplest form) with your body-type mesh morph like this.

[Image Removed]

Note how the eye is slightly inset. This might be OK, but it might not match exactly. Right now in this example, only the eye’s mesh morph node has Reshape Skeleton enabled; the body and head nodes do not.

B2: My previous suggestion should solve this. By moving the skeleton reshape to just the eye, you should be able to use the normal morph stack definition and application to do the rest of the head and body work separately. And don’t forget, as in my previous post, you do not need to do this for every eye LOD.

B3: This indicates that the eyes have different UV layouts, and you will need to define which one to use in Mutable. There is a property on each texture pin to define that.

[Image Removed]

B4: I might not understand your question entirely, but async operations from Mutable are set up in such a way that you cannot sync them. The body and its setup may take much longer than the head. “When the head and body deform simultaneously, they do not deform at exactly the same time.” Can you provide an example of this?

Hopefully that helps a bit more.

Dustin

Thank you for your response. I have resolved some of the issues, but there are still a few points that remain unclear:

  1. I need blended deformation, which means I cannot use PoseMesh. I have carefully studied the nodes of PoseMesh, and perhaps adding a weight function to it would be a good solution. After all, its core mechanism is to modify the skeleton through BoneTransform.
  2. Regarding MeshMorph, your approach aligns with the current requirements. However, this requires adding such a node everywhere. Initially, I used MeshMorphStack as a dynamic variable for transfer and modification, but it encounters bugs during transmission in MacroLibrary and thus is unusable. It is hoped that this workflow can be optimized in the future. (From a developer’s perspective, we would prefer to apply deformation in the MeshNode first and then customize each MeshSection—but the current functionality works in the reverse way.) Regarding the issue where the Body and Head components call Async simultaneously, I now have no choice but to modify the deformation to run at runtime. I will wait until the developers confirm the final options before generating non-runtime deformation via another CO (Component/Configuration Object, context-dependent).
  3. As you mentioned, these processes are asynchronous, so there may be inconsistencies in their generation times. While I can wait for both to complete before applying the deformation, the visual performance is not smooth. To help you understand: imagine the game runs at 60 FPS, but when I adjust the deformation parameters using the slider, the updates only take effect at 20 FPS or a random FPS rate.
  4. I did overlook the UV Layout Mode earlier, and now the error message has been resolved, which works perfectly. However, I have identified a new issue: MetaHuman’s Eyelashes only have 4 LODs (Level of Details), while EyelashesGroom has 8 LODs. This causes a warning to be generated in Mutable.[Image Removed]
  5. I would also like to know when you plan to complete TableNode’s support for Groom? If support cannot be provided in the short term, could you offer some suggestions?

Thanks~

I need blended deformation, which means I cannot use PoseMesh.

Yes, this makes sense. Currently, the only tool we have for this context is the reshape mesh with the reshape skeleton options.

We would prefer to apply deformation in the MeshNode first and then customize each MeshSection—but the current functionality works in the reverse way.

I’ll discuss this with the team and raise this use case with them, but the upcoming dataless asset approach might help you here. We’ll have more info in the near future on that. Currently, how you structure your data matters a lot to how many nodes you use.

While I can wait for both to complete before applying the deformation, the visual performance is not smooth.

Runtime generation of the meshes takes time, especially with Metahuman meshes. Assuming you have mesh streaming on (it is on by default), visual performance will mostly be affected by how heavy your content is and your hardware. A strategy I’ve employed in a character customization setup in the past is to do some edits through Mutable, such as clothing and large mesh swaps. For detail items like morph targets and texture coloring, I preview those edits through other calls and not through Mutable. Then, once all of the data is gathered for your character customization, you only make one call to Mutable to finalize your mesh.

That said, this is important for the Mutable team to address in the future and they are working on performance improvements in general.

However, I have identified a new issue: MetaHuman’s Eyelashes only have 4 LODs (Level of Details), while EyelashesGroom has 8 LODs. This causes a warning to be generated in Mutable.

This depends on how your graph is set up, but ultimately, since you are trying to have the eyelash groom LOD settings take their information from the eyelash meshes, you will need to specify a LOD scheme for the eyelash grooms separate from the meshes.

I would also like to know when you plan to complete TableNode’s support for Groom?

I’m getting an answer for this, but unfortunately, there isn’t much of a workaround for this in terms of direct support. The Groom constant node has a set of requirements for its setup, and the best way to incorporate multiple groom options is to use the ObjectGroup setup and create separate child objects for each of your hairstyles.

Dustin

Thank you for your reply, but there are still some questions I hope you can further clarify.

I’m using MetaHuman to create morphing functions. However, as you can see from this image, there are hundreds of morphs, and in the details panel, we can’t hide those that don’t need to be modified.

[Image Removed]

I have indeed created multiple float nodes to achieve this function, but due to the large amount of data, I have to create hundreds of nodes.

It should support more modular configurations, such as modifying floats and strings externally via dynamic variables or DataTables.

I hope the process can be further simplified in the future. Thanks

I’d also like to express my desire for more parameter support. In my case we would like to set arbitrary asset references and values for behavior components etc. These are not part of a visual material or morph target parameters but would live in the CustomizableObjectInstances.

It makes some sense to leverage Mutable for this, as these arbitrary parameters are correlated with the visuals. I’d like to use the Population tooling to influence their distribution (e.g., a weak-looking PopulationClass should behave more timidly, while a brawny or scarred-looking NPC would be more likely to roll an aggressive behavior). It feels very hacky to try to add these in through dummy visual assets or morph targets or some such thing, but using the Population parameter weight works like how I would like once I cheat in a param.

Is there perhaps a tutorial or example of how to make new nodes that add these piggy-backed parameters to the CustomizableObject model? It perhaps makes more sense to add additional parameters to the population system only, but I would think that ultimately the CustomizableObjectInstance would need to hold these values.

I tried to dive into the code but without some direction it seems very dense.

Good.

Additionally, I would also like to ask how you manage the assets for males and females?

Do you treat them as subclasses or use mesh switches?

The mesh switch can only use one mesh section, which makes it impossible to properly expand the materials (Gender Body, Gender Head, Eyes, and others).

This approach is not shown in the mutable example.

I’m really looking forward to your talk video.

I have another question regarding the generation of Meta Human head LODs.

The face is a separate Actor Component, and the eyes and teeth are added as child components.

When generating LOD1, it causes the base mesh section of the face to have material missing.

Since there are 8 LODs in the head, I have to repeatedly create nodes to generate data. Is there a way to simplify this process?

[Image Removed]

Hello, this is a case I have prepared.

It can reproduce the problem, and I’m looking forward to your solution.

In addition to this bug, you can check that the case uses a large number of Mesh sections.

As the number of sections increases, the number of nodes will also become extremely large.

It is hoped that this issue can be optimized in the future.

Thank you.

Thanks,

The issue arises due to the LOD strategy and how the LODs for the Metahumans are set up. The automatic LOD strategy won’t work well for the Metahuman head because it tries its best to collapse down the section indices, but the Metahuman head replaces whole indices with brand-new material sections. Then, when you attempt to add the teeth to that using the inherit-from-parent-component strategy, it fails.

So a potential strategy for the example you sent could look something like this (and hopefully this can highlight how you can think about it in terms of managing complexity as well.)

In your CO_MHC, you keep the body example simple and inherit the LOD setup from the primary mesh. This means you only need the one mesh section connected and the number of LODs you want defined.

[Image Removed]

Then, in CO_MHC_Head, we add a new mesh component for the head, focusing only on the head meshes: no eyes, teeth, or other elements. Because of the head mesh complexity, you need to handle the LODs for each mesh subsection independently and use the “Only Manually created LODs” strategy.

[Image Removed]

In CO_MHC_Teeth, you need to do some special work for the high-res elements like eye caps and eyelashes, which only support LODs 0-4. For the teeth and eyes, we use Automatic from mesh. For the eyelashes, we use manual, defining each mesh subsection. For the eyeshell and lacrimal fluid, we use Automatic from mesh.

[Image Removed]

This keeps the setup as minimal as possible in terms of connections while keeping the individual parts maintainable. The main problem is that with this setup, we are recreating the base meshes that already exist as a pass-through with the highest-res materials and textures. The next step would be to expose the material information and textures and then start integrating those into the graph. Metahumans are complex and do require more nodes because of that.

I recommend, depending on your project type, starting by prioritizing LOD 2 or 3. If you’re working on a game, LODs 0 and 1 are really meant for cinematic and film work.

Dustin

  1. Thank you for your patient guidance. I have now resolved the LOD issue, but Mutable is more complex than I imagined, and I still have a few questions that I hope you can help me answer.
  2. When I use the mesh morph node and check “ReshapeSkeleton”, it causes the body bones to deform, and then the animation gets distorted. How do I use it correctly here? The result I hope for is that the skeleton also completes the deformation after the morphing.
  3. After the head is morphed, do sub-components such as my eyes and mouth also need to use the same MeshMorph node? Currently, I need to create them this way, but the process is very cumbersome.
  4. If the body bones are excluded when the head is morphed, an incorrect animation effect will be obtained.
  5. For easier transfer, I have moved both the mutable and animations to the directory “bodyShapeDof4”. You just need to place it in the subdirectory of MetaHumans for an update. Thank you.
  1. when I use MeshMorph, sub-components like the eyes can only achieve synchronous deformation through stacking or duplicating identical nodes.[Image Removed]
  2. When I place ReshapeSkeleton before other deformations, ReshapeSkeletonMorph will fail; if I place it after other deformations, those other deformations will become ineffective.
  3. When I dynamically modify materials and textures using TableNode, it shows the following warning.[Image Removed]
  4. When I create two CustomizableSkeletalComponents (one for the head and one for the body), how can I ensure that their UpdateAsyncResults are synchronized? Currently, there is an inconsistency in the intervals between them. When the head and body deform simultaneously, they do not deform at exactly the same time.