I have sculpted a deformation mesh from my main character mesh, baked the vertex positions into the vertex colors of the character mesh, and am feeding it into the WPO input. I am trying to create a material that lets me blend between the two, like a morph target.
I have a working version of this material in Lens Studio (another piece of software with a material graph), so I know that, in theory, it should work in Unreal. But when I try to construct the same material in Unreal, it doesn’t work. Here is a look at my basic setup, and a gif of what happens when I apply this material. (Note that the character mesh now also appears about 20 ft tall.)
After the vertex color node, I’m normalizing the value range and recombining it before adding it to the pre-skinned local position. In Lens Studio, where this material works, the equivalent node is called Surface Position. I assumed pre-skinned local position was the Unreal counterpart, but when I plug it in, my character mesh seems to get double transforms (it offsets from its pivot the further it moves from 0) and blows up into a giant. I’ve tried skipping the normalization step and plugging the RGBA directly into the add, and the result is the same. I’ve also tried a selection of other nodes in place of pre-skinned local position, with different results, but nothing that shows my deformation mesh at all; the result is consistently broken and incorrect.
Hi, thanks for your reply. You’re right, the polycount should be sufficient; it’s around 30k for this character. I tried adding in the world position as you suggested, and I got a very similar result to what I had before: offset transforms and everything.
Also, you are trying to normalize the range by remapping, but the vertex paint is already normalized: it can only hold values between 0 and 1, in 8-bit RGB channels. By remapping, you are drastically reducing the vertex paint data you have, by about 25 times. I don’t know if Unreal automatically remaps vertex paint on import, but it is better to do this in the software where you baked it.
This behavior is the same for both a skeletal and static mesh. It is called World Position Offset for a reason, after all. The vector you input to the WPO is added to the world position as an offset, regardless of mesh type.
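To make the additive behavior concrete, here is a minimal sketch (the numbers are purely illustrative) of what the WPO pin does per vertex, and why feeding it a position instead of an offset produces the double-transform, giant-mesh symptom:

```python
# WPO is additive: whatever the material outputs is added on top of each
# vertex's existing world position.
world_pos = [10.0, 0.0, 50.0]   # vertex position after skinning, in world space
wpo_input = [0.0, 0.0, 5.0]     # this pin expects an *offset*, not a position

final = [p + o for p, o in zip(world_pos, wpo_input)]
print(final)  # the vertex simply moves 5 units up: [10.0, 0.0, 55.0]

# Feeding a full position (e.g. pre-skinned local position plus a baked
# offset) into WPO adds that whole position on top of the existing one, so
# the position is effectively applied twice: the "double transform" and
# giant-scale symptoms described above.
```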
The important difference between a skeletal mesh and a static mesh is that skeletal mesh offsets generally need to be stored in tangent space and transformed into world space before going into the WPO, at least if you want the offsets to stay relative to the underlying mesh as it animates.
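A rough numpy sketch of that tangent-to-world transform (this is what Unreal's transform nodes do for you; the function and vector names here are illustrative, and the basis vectors are assumed unit-length):

```python
import numpy as np

def tangent_to_world(offset_ts, tangent, bitangent, normal):
    """Transform a tangent-space offset into world space using the
    per-vertex TBN basis (columns: tangent, bitangent, normal)."""
    tbn = np.column_stack([tangent, bitangent, normal])  # 3x3 basis matrix
    return tbn @ offset_ts

# Example: a vertex whose tangent frame happens to be axis-aligned.
tangent   = np.array([1.0, 0.0, 0.0])
bitangent = np.array([0.0, 1.0, 0.0])
normal    = np.array([0.0, 0.0, 1.0])

# An offset of 1 unit "along the normal" in tangent space maps onto whatever
# direction the normal points in world space, so it follows the surface as
# the skeleton animates.
offset_ws = tangent_to_world(np.array([0.0, 0.0, 1.0]),
                             tangent, bitangent, normal)
```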
To correctly map the offset values, you would need to divide all offsets at bake time by some maximum factor that brings them all into the 0-1 range. This way, the 0-1 vectors can be re-multiplied by that known factor in the material to restore the original range.
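A small sketch of that bake/restore round trip. Since offsets can be negative, this also assumes the common extra step of remapping the signed -1..1 range to 0..1 for storage (not stated above, but needed for color channels):

```python
import numpy as np

# Bake side: per-vertex offsets (deformed position minus base position)
# can be any signed range.
offsets = np.array([[ 12.0, -3.5, 40.0],
                    [-20.0,  8.0, -1.0]])

# Divide by the largest absolute component so everything fits in [-1, 1],
# then remap to [0, 1] for storage in the vertex color channels.
max_factor = np.abs(offsets).max()          # record this constant
encoded = offsets / max_factor * 0.5 + 0.5  # now storable as colors

# Material side: undo the remap with the same known factor to recover
# the original offsets.
decoded = (encoded - 0.5) * 2.0 * max_factor
```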
8 bits, however, is quite low for storing vectors. This is evident even on 8-bit normal maps, and the position error magnifies over larger offset distances. If it is good enough for your needs, then by all means.
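A back-of-envelope way to see how that error grows with offset range, assuming signed offsets squeezed into 8 bits per channel:

```python
# Worst-case quantization error for offsets stored in an 8-bit channel.
bits = 8
steps = 2 ** bits - 1  # 255 distinct values per channel

for max_offset in (10.0, 100.0, 1000.0):
    # The signed range spans 2 * max_offset; worst-case error is half a step.
    error = (max_offset * 2) / steps / 2
    print(f"range +/-{max_offset:g} units -> up to ~{error:.3f} units of error")
```

The same 255 steps get stretched over whatever range you bake, so a larger maximum offset directly means coarser positions.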
Vector Displacement Maps are usually either 16- or 32-bit, so that the offsets can be stored significantly more accurately, but obviously the memory requirement scales with it.