Splatoon or Portal 2 like paint

Hi, I want to implement painting in my game, pretty much like painting in Portal 2 or Splatoon.

I’ve thought about doing it with decals, but then hundreds or maybe even thousands of decals might create a performance problem, no?

After that I tried implementing a second texture that sits over the normal texture on paintable objects. When you hit one with the paint gun, it finds the UV coordinates of the hit location and dynamically paints the texture there (this required me to change engine source code and keep vertex data accessible, not to mention that the painting itself is a big for loop that runs on the CPU). It more or less worked, but I’ve had a few problems with it.

First of all, I’d rather have an implementation that does not require an extra texture for every paintable object, and I don’t want to change every single paintable material in the game to contain a dynamic texture.

Another problem I had is painting two objects that are near each other: I can only paint one at a time, even if the paint hits both of them. I’ve come up with very convoluted ways of overcoming that, but it’s still not as good as I’d like. The problem is that the only way I found to determine the UV coordinates is a raycast; a sphere sweep tells you nothing.

Another problem: if the paint is big enough and lands on the edge of the model, it bleeds across the UV coordinates far enough to paint a different part of the model as well.

The last problem I’ve had is with more complex models: often the UV coordinates do not map directly to locations on the model. By that I mean a spot 10 cm to the right on the model might be on the other side of the UV map.

Deferred decals should solve all those problems and be much simpler in every way than what I’ve tried, but their performance in large numbers worries me.
Anyone have a good idea on how to implement this? Thank you very much :slight_smile:

Well, you need to use dynamic textures together with Rama’s trace-to-UV-coordinates engine modification, and you can create a texture like that.

Well, I implemented exactly that, but it’s kind of weird and limited. If I get the UV coordinate of the hit, how do I paint the texture according to the UVs so that it still looks right even when the UVs are not organized in a way that makes sense? A face 10 pixels to the right in the UV map might be in a completely different place on the mesh; how do I account for that when I paint the texture?

Painting straight onto the texture worked just fine for things like planes, but even on cubes it started causing problems. How do I paint the same size on every mesh, too? If the world scale is bigger, the paint will also be bigger.

Also, traces don’t work well with two objects at once, and traces with a diameter (sweeps) don’t report the right hit position.

Those are the problems I’ve had. Using decals solves all of them, but I’m not sure how well it will perform on the GPU… What do you think?

Well, I think decals would be way heavier. You could technically just do a lot of short traces, for example in a circle.
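A sketch of that idea, assuming you already have the hit location and surface normal from the first trace (plain C++ with illustrative types, not engine code):

```cpp
#include <cmath>
#include <vector>

struct Vec3 { float x, y, z; };

static Vec3 cross(Vec3 a, Vec3 b) {
    return { a.y * b.z - a.z * b.y, a.z * b.x - a.x * b.z, a.x * b.y - a.y * b.x };
}
static Vec3 normalize(Vec3 v) {
    float len = std::sqrt(v.x * v.x + v.y * v.y + v.z * v.z);
    return { v.x / len, v.y / len, v.z / len };
}

// Generate `count` points on a circle of `radius` around the hit location,
// in the plane perpendicular to the (unit) hit normal. Each point can then
// serve as the origin of a short trace toward the surface, so nearby
// objects inside the splash radius also get hit.
std::vector<Vec3> CirclePoints(Vec3 center, Vec3 normal, float radius, int count)
{
    // Build an orthonormal basis spanning the plane of the circle.
    Vec3 helper = (std::fabs(normal.z) < 0.9f) ? Vec3{0, 0, 1} : Vec3{1, 0, 0};
    Vec3 tangent = normalize(cross(normal, helper));
    Vec3 bitangent = cross(normal, tangent);

    std::vector<Vec3> points;
    for (int i = 0; i < count; ++i) {
        float angle = 2.0f * 3.14159265f * i / count;
        float c = std::cos(angle), s = std::sin(angle);
        points.push_back({ center.x + radius * (c * tangent.x + s * bitangent.x),
                           center.y + radius * (c * tangent.y + s * bitangent.y),
                           center.z + radius * (c * tangent.z + s * bitangent.z) });
    }
    return points;
}
```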

Hmm, that’s actually a good idea, but how do I find which area of the UVs to paint from a bunch of coordinates that each correspond to one of the circle’s edge locations (indexed), and do it even if the UVs are not organized nicely? I’d need to combine the UV information with the edge locations for that, I have no idea what to search for to find that technique, and I don’t have nearly enough experience to implement it all by myself.

bumping this thread

I would like to know the answer here too, because I tried to do something in the style of Portal 2 but have no idea how to do it at all. You can get the UVs of the meshes, but then I figure you need a layer in the material or something. Maybe it can work with a scene capture, but for all the meshes in the map?

The method here is to first unwrap the target meshes facing the camera and render either their local or world positions into the UV layout. You can atlas all of the meshes into one big atlas if you have a reliable way of indexing, i.e., keep track of which “index” each mesh is and use the 1D-to-2D conversion material function to find the right sub-frame. Then you simply perform a sphere mask using the hit location (either local or world space, depending on what you decide), and the sphere mask will automatically wrap around seams even if they are not touching in UV space, since it compares the positions stored in the texture instead of actual world space or UVs.
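For what it’s worth, the per-texel comparison can be sketched like this (plain C++; the falloff formula is one common sphere-mask formulation, not necessarily the exact math of the engine’s SphereMask node):

```cpp
#include <algorithm>
#include <cmath>

struct Vec3 { float x, y, z; };

// Sphere mask in position space: full paint inside the hardness core,
// fading to zero at `radius`. `texelPos` comes from the unwrapped position
// texture, so texels that are adjacent on the mesh get painted together
// even when their UV islands are far apart.
float SphereMask(Vec3 texelPos, Vec3 hitPos, float radius, float hardness)
{
    float dx = texelPos.x - hitPos.x;
    float dy = texelPos.y - hitPos.y;
    float dz = texelPos.z - hitPos.z;
    float d = std::sqrt(dx * dx + dy * dy + dz * dz);

    // hardness in [0,1): 0 = soft falloff over the whole radius,
    // approaching 1 = hard-edged circle.
    float falloff = std::max(radius * (1.0f - hardness), 1e-6f);
    return std::clamp((radius - d) / falloff, 0.0f, 1.0f);
}
```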

Well, I have no idea how to manage that texture atlas at all, and the second problem I see here is how you manage more than one sphere mask. I can handle many, but the overhead in all the materials is real.

You would render one sphere mask at a time into a render target, and then it would stay there forever. So at any one time you are rendering just one, but they accumulate. In your actual materials you would just reference the final render target; there would be no sphere masks or anything. The sphere mask only happens at the stage where you apply a single brush stroke.

Atlasing would have to be figured out manually; there is no pre-existing system for that. Basically it would work like a flipbook. I suggest you ignore that part to start with and just get it working for a single mesh.

Is it possible to see a video or sample or something from Epic about this, with images?

The only way I saw was this, and I can’t do that for every mesh in the map, tbh.

The effect really does not scale well to every mesh in a map without a lot of custom work on your part. To my knowledge nobody has applied this effect to a whole level. That is why I suggest you start small and get it working for one mesh before attempting that; you will mostly have to sacrifice resolution in order to support more meshes. Getting it to work with one mesh should be fairly straightforward.

Try using the material function “Unwrap UVs for Render”. You will need to create a material instance to use it, since it has two parameters that are not exposed as inputs. It was only really meant as a helper function for the R2T blueprint stuff, but it can also work by itself if you place an orthographic scene capture actor pointing down. Just make the material instance, set “Size” to match the ortho width of the viewport, and make sure to set “Unwrap” to 1. Then render something like local position or world position into the emissive, make the material unlit, make sure your RT is set to HDR, and capture the unwrapped mesh. The rest should be simple, like the thread you linked, where you render into another ‘paint RT’. Your paint material would use the unwrapped position texture as the first input to the sphere mask.