Seams in Render Target Texture applied to character

Hi there!

Following this talk, which shows how to perform dynamic painting on a character, I've run into an issue that I'm unable to solve. I created a similar material that unwraps the character using WorldPositionOffset; the emissive output, in this case, writes the pre-skinned position. Next I use a Blueprint to capture the unwrapped character into a render target.
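For reference, the node logic I'm using is roughly this (written out as code just to describe it; names like UnwrapOrigin and UnwrapSize are only illustrative, not actual node names):

```cpp
#include "CoreMinimal.h"

// World Position Offset: move each skinned vertex to a flat plane laid out
// by its UV coordinates, so the ortho scene capture sees the unwrapped mesh.
FVector ComputeUnwrapOffset(const FVector& AbsoluteWorldPosition,
                            const FVector2D& UV,
                            const FVector& UnwrapOrigin, // where the flat plane sits in the world
                            float UnwrapSize)            // should match the capture's ortho width
{
	const FVector UnwrappedPosition = UnwrapOrigin + FVector(UV.X * UnwrapSize, UV.Y * UnwrapSize, 0.0f);
	return UnwrappedPosition - AbsoluteWorldPosition;    // WPO is an offset, not an absolute position
}

// Emissive: the pre-skinned local position, remapped to 0-1 by the mesh
// bounds so it survives an 8-bit render target (a float RT can skip this).
FVector ComputePositionColor(const FVector& PreSkinnedLocalPosition, float MeshBoundsExtent)
{
	return PreSkinnedLocalPosition / (2.0f * MeshBoundsExtent) + FVector(0.5f);
}
```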

The problem is that when I use the render target with a sphere mask, weird seams appear, and I have no idea what's causing them. I tried using a smaller ortho width in the scene capture options (a very subtle difference, just to test whether I'm sampling the background), but it looks like that's not the case. In fact, if I plug the render target directly into the material output, there are no black seams, but the values are incorrect (as can be seen in the fourth image).

Any help would be really appreciated!


The material that renders that blue/white/purple mask into the render target has to be wrong.

First.
Given what was shown in the talk, the values in the render target are supposed to represent normalized world-location values, so the colors should look much like a vector field. What you are showing isn't that.

Second.
You are generating the seams by having the clear color of the RT be black. Use a neutral color instead to eliminate the issue.
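If you clear the RT from Blueprint or C++ before each capture, something like this would do it (just a sketch):

```cpp
#include "Kismet/KismetRenderingLibrary.h"
#include "Engine/TextureRenderTarget2D.h"

// Clear the unwrap render target to a neutral mid-grey instead of black,
// so pixels outside the unwrapped mesh don't read back as position (0,0,0).
void ClearUnwrapTarget(UObject* WorldContext, UTextureRenderTarget2D* UnwrapRT)
{
	const FLinearColor NeutralClear(0.5f, 0.5f, 0.5f, 1.0f);
	UKismetRenderingLibrary::ClearRenderTarget2D(WorldContext, UnwrapRT, NeutralClear);
}
```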

Third.
The multiply by 1000 isn't a random value: 1000 is the size of the render target. Not sure why he didn't use a power of two; 1024 is probably better, with a matching RT. If they don't match, you will have issues.
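If you create the render target at runtime, keeping the size and the material constant in one place avoids the mismatch (a sketch; the 1024 and the format are just what I'd pick):

```cpp
#include "Kismet/KismetRenderingLibrary.h"
#include "Engine/TextureRenderTarget2D.h"

// Create the render target at the same power-of-two size the material
// multiplies/divides by, so texel positions and the UV math line up.
UTextureRenderTarget2D* CreateUnwrapTarget(UObject* WorldContext)
{
	const int32 RTSize = 1024; // keep this in sync with the constant used in the material
	return UKismetRenderingLibrary::CreateRenderTarget2D(WorldContext, RTSize, RTSize, RTF_RGBA16f);
}
```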

Fourth.
I like the vertex interpolation idea, but it's not the same result as world position minus material offset, is it?
That could be why you are getting a weird starting mask.

The overall idea is that the hit location is stored as a color value and used to paint it in on the second pass. Or the third pass. The way your images look, that hit data is not correct, and the value (4, 0, 50) makes little to no sense, tbh.
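For reference, the comparison the sphere mask does is roughly this; the key point is that both positions have to be in the same space, which is what your values suggest isn't happening (a sketch of the node's math, not the exact engine code):

```cpp
#include "CoreMinimal.h"

// Roughly what the SphereMask material node computes: 1 inside the radius
// around the hit, falling off to 0 at the edge. Both positions must be in
// the same space (here, the pre-skinned local position stored in the RT).
float SphereMask(const FVector& StoredPosition, // sampled from the unwrap render target
                 const FVector& HitPosition,    // hit location transformed into the same space
                 float Radius,
                 float Hardness)                // 0-1, like the node's hardness input
{
	const float Distance = FVector::Dist(StoredPosition, HitPosition);
	const float SoftEdge = FMath::Max(Radius * (1.0f - Hardness), KINDA_SMALL_NUMBER);
	return FMath::Clamp((Radius - Distance) / SoftEdge, 0.0f, 1.0f);
}
```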

Also, the talk was from 2017. They may have added the UV parameter to the hit result after that. And if so, wouldn't that make the whole first step unnecessary? The purpose of it all is to return the 0-1 UV coordinate of where the hit took place on the character…
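Something along these lines, assuming the hit UV is actually available for your mesh and collision setup (it needs the project setting enabled and a complex trace):

```cpp
#include "Kismet/GameplayStatics.h"
#include "Engine/EngineTypes.h" // FHitResult

// Try to read the hit UV directly. Needs "Support UV From Hit Results"
// enabled in Project Settings -> Physics and a trace against complex
// collision that returns a face index; may not work for every mesh type.
bool GetHitUV(const FHitResult& Hit, FVector2D& OutUV)
{
	const int32 UVChannel = 0; // the UV channel the paint mask uses
	return UGameplayStatics::FindCollisionUV(Hit, UVChannel, OutUV);
}
```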

I would try just using that first, as each RT pass adds overhead.

The projection part likely still needs a UV texture.
Expanding from the point you pick by 5 units on a non-flat, non-sequential UV layout may still require the sphere mask and UV-range method, even if you can get the UV hit location without it.
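If you do get the hit UV, the paint pass could just stamp a brush material into the accumulation RT at that UV, something like this (a sketch; the brush material and its HitUV parameter are hypothetical):

```cpp
#include "Kismet/KismetRenderingLibrary.h"
#include "Materials/MaterialInstanceDynamic.h"
#include "Engine/TextureRenderTarget2D.h"

// Stamp an (additive) brush material into the paint render target, centered
// on the hit UV. This only covers one UV island, which is why the
// sphere-mask/position approach still helps across seams.
void StampBrushAtUV(UObject* WorldContext,
                    UTextureRenderTarget2D* PaintRT,
                    UMaterialInstanceDynamic* BrushMID, // hypothetical brush material instance
                    const FVector2D& HitUV)
{
	BrushMID->SetVectorParameterValue(TEXT("HitUV"), FLinearColor(HitUV.X, HitUV.Y, 0.0f, 0.0f));
	UKismetRenderingLibrary::DrawMaterialToRenderTarget(WorldContext, PaintRT, BrushMID);
}
```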

Try adjusting the points above and see what it does. I'll be trying my own mud implementation partially based on this. I left the RT texture hooked into my materials and was partially stumped on how to get the hit location until I saw this; I didn't know the sphere mask worked like this either.

Hi MostHost!
Thanks a lot for your answer! Before seeing this response, I tried to render the mask directly (instead of sampling the local-bone positions from a texture, I used the sphere mask in the unwrapping step with the original positions). The render target is now updated using the "additive" mode on the scene capture component. However, I'm still getting seams (though much less noticeable than before).
I’ll check what you said and see if it improves. Lots of thanks!