I’m in the process of delving into more complex programming, including shader programming, and I am working on a simplified water surface effect. I intend to have a simplified simulation (nothing physically accurate or complex, as that isn’t required) using a custom shader. Having done extensive research, I think it should be possible to create such a shader in HLSL and generate a normal map + heightmap texture for use in a material.
Assuming I can make such a shader, there is one effect that I desperately need. Right now, with a similar but inadequate effect, I simply use BeginOverlap to detect whether certain actors/pawns overlap the surface, and then spawn a “splat” ripple based on the center of the colliding object, using a Render Target to bake it out into a texture (as in the Blueprint Render Target sample). What I want is to detect the area of overlap, not just the point where the overlap starts. Basically, I need a splat map that covers the whole *area* (pixels) of the surface that the meshes overlap, so that the shader generates waves based on the contours of the meshes rather than the single point where they start overlapping, or the center of the mesh.
The water surface is not large; roughly the size of a small pool. The splat map also doesn’t have to be high-res; an approximation of the contours is sufficient (based on Simple Collision for static meshes and the Physics Asset for skeletal meshes, with Complex Collision as a quality setting). I would prefer to do all of this in a shader on the GPU and simply output the resulting ripple texture to a Texture2D variable in the class, to be used by the material.
The basic question is: is it at all possible to generate such a “splat” map during runtime and then use it in a shader? And if yes, how?
Well, you need to either represent the object’s cross section as a simple shape (or a combination of simple shapes), or use something like a signed distance field to get an accurate cross section at the water level if the object’s shape is arbitrary.
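To make the SDF route concrete: if you have a signed distance function for the object (negative inside, positive outside), sampling it on the water plane directly gives you the cross-section mask. A minimal Python sketch of that idea (plain Python with made-up names, not UE code; a real version would sample per texel on the GPU):

```python
import math

def sphere_sdf(p, center, radius):
    # Signed distance from point p to a sphere: negative inside, positive outside.
    return math.dist(p, center) - radius

def cross_section_mask(sdf, water_z, res=8, extent=4.0):
    # Sample the 3D SDF on the water plane z = water_z;
    # texels where the distance is <= 0 lie inside the object.
    mask = []
    for j in range(res):
        row = []
        for i in range(res):
            x = -extent + 2.0 * extent * i / (res - 1)
            y = -extent + 2.0 * extent * j / (res - 1)
            row.append(1 if sdf((x, y, water_z)) <= 0.0 else 0)
        mask.append(row)
    return mask

# A radius-2 sphere centered on the plane cuts a radius-2 disc out of the mask:
mask = cross_section_mask(lambda p: sphere_sdf(p, (0.0, 0.0, 0.0), 2.0), water_z=0.0)
```

The hard part, as discussed below, is getting a usable SDF for arbitrary (especially skeletal) meshes in the first place.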
While I have some knowledge of what distance fields are, it’s not enough to picture how they can be applied here. I know that I can’t use the built-in distance fields, since those only work for DFAO, DF shadows, and with materials; they don’t work with skeletal meshes, and are not detailed enough for this kind of effect (or else they become too expensive in general).
So how should I use signed distance fields? How can the shader know what is overlapping/intersecting the surface plane and generate a cross section?
Okay, after having tried stuff and failed, and then continued with other important work, I’ve been thinking about this some more.
The basic gist is that any object, be it a static or skeletal mesh, should be able to interact with the surface. That rules out distance fields, since they only work with static meshes. Similarly, since not every object should interact with the surface, whole-scene effects such as depth fade can’t be used either. I did some more research and found some work done on Assassin’s Creed III / IV. AC uses spheres placed around the hull to interact with the water, which sparked an idea:
Fill up the mesh with small collision spheres (not too small and not too many) to approximate the shape, taking animation for skeletal meshes into account (perhaps by using a physics asset with only spheres). It doesn’t have to be precise, as long as the spheres don’t stick out of the mesh too much and there aren’t any large gaps. Then it’s just a matter of detecting the spheres overlapping the surface and “painting” the surface’s render target based on the locations of the spheres to create the splat map. This is then updated every frame (or every couple of frames) to check whether the spheres are still overlapping.
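For each overlapping sphere, the splat it should paint is just the circle the sphere cuts into the water plane, which falls straight out of Pythagoras. A minimal sketch (plain Python, hypothetical names, not the UE API):

```python
import math

def sphere_splat(center, radius, water_z):
    """Circle that the sphere cuts into the water plane, or None if no overlap.

    Returns ((cx, cy), cross_radius): the disc to paint into the render target.
    """
    d = center[2] - water_z                  # signed height of sphere center above plane
    if abs(d) >= radius:
        return None                          # sphere not touching the plane
    cross_r = math.sqrt(radius * radius - d * d)
    return (center[0], center[1]), cross_r

# A half-submerged sphere paints a full-radius splat:
splat = sphere_splat((1.0, 2.0, 0.0), 0.5, water_z=0.0)
```

Painting these discs per frame (e.g. via draw-material-to-render-target) would then accumulate the splat map.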
Since we’re only using a small water surface and only one will be used at any one time, the cost of the many overlaps should not be too much of a problem. There might be ways to optimize it based on distance and not have it calculate all the time.
I believe this is similar to how NVIDIA Flex works. Soft bodies, fluid volumes and cloth are all made up of spherical particles that interact with each other and affect the connected surfaces. In this case, however, we don’t use them as particles/fluid/soft bodies, but merely to detect objects interacting with a surface.
So, the question: is there a way to do this quickly without having to place all the spheres manually? That is, fill a static mesh with spheres to approximate its volume, with a resolution setting that controls the size and number of spheres? I imagine that for skeletal meshes I just create a separate physics asset containing only spheres, make sure it’s only used to interact with the water surface, and then somehow synchronize it with the visible mesh (as the skeletal mesh itself will probably use a standard physics asset for all other collision interaction).
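One simple way to auto-fill a mesh with spheres is to voxelize its bounding box and drop one sphere per cell whose center is inside the mesh. A rough Python sketch of that idea, where `inside(x, y, z)` is a hypothetical point-in-mesh test (in UE this could be a short overlap query against the mesh’s simple collision):

```python
def fill_with_spheres(inside, mins, maxs, spacing):
    """Drop one sphere per grid cell whose center lies inside the mesh.

    `inside(x, y, z)` is a hypothetical point-in-mesh test;
    `spacing` is the resolution knob that controls sphere size and count.
    """
    radius = spacing * 0.5
    spheres = []
    z = mins[2] + radius
    while z < maxs[2]:
        y = mins[1] + radius
        while y < maxs[1]:
            x = mins[0] + radius
            while x < maxs[0]:
                if inside(x, y, z):
                    spheres.append(((x, y, z), radius))
                x += spacing
            y += spacing
        z += spacing
    return spheres

# Filling a 2x2x2 box at spacing 1 yields a 2x2x2 arrangement of 8 spheres:
box = lambda x, y, z: 0.0 <= x <= 2.0 and 0.0 <= y <= 2.0 and 0.0 <= z <= 2.0
spheres = fill_with_spheres(box, (0.0, 0.0, 0.0), (2.0, 2.0, 2.0), 1.0)
```

This leaves small gaps between spheres, which matches the stated requirement that the approximation doesn’t need to be precise.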
I’ve made a very rudimentary system where I make a grid of points and fire a line trace from each point (with bFindInitialOverlaps set to true). If a line trace hits, the value of that grid cell is set to 1, effectively creating a mask. All this works fine, but the image is at a resolution of 128 × 128, which means over 16,000 line traces every frame (as long as the number of overlapping actors > 0) and thus a lower FPS.
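The cost of that brute-force approach scales with the square of the resolution. A stand-in sketch in Python, where `hit_at(i, j)` substitutes for a UE line trace with bFindInitialOverlaps (a hypothetical callback, not the engine API):

```python
def trace_mask(res, hit_at):
    # One line trace per texel: res * res queries per update.
    # `hit_at(i, j)` stands in for the physics trace at grid cell (i, j).
    return [[1 if hit_at(i, j) else 0 for i in range(res)] for j in range(res)]

# At res = 128 this is 16,384 traces per frame, hence the FPS drop.
# Tiny example: a 2x2 "object" in the middle of a 4x4 grid:
mask = trace_mask(4, lambda i, j: 1 <= i <= 2 and 1 <= j <= 2)
```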
One alternative I’m considering is to fire only two line traces per column (so in this example that would be 128 × 2, as opposed to 128 × 128). I would then detect the hits for each object to find the points that lie between the hits and are therefore inside the object. However, multi line traces don’t register where they stop overlapping, and also don’t register if they overlap the same object again later (if it’s a concave mesh, like a trace going through a foot and then hitting an arm).
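For a single convex object the two-traces-per-column idea does work: fill every texel between where the forward trace and the reverse trace each first enter the object. It’s the concave case (multiple entry/exit pairs per column) that breaks it. A sketch of the convex case per column, with hypothetical names:

```python
def fill_column(res, hit_forward, hit_backward):
    # hit_forward: texel index where the forward trace first enters the object;
    # hit_backward: index where the reverse trace first enters (the far side).
    # Everything between them is inside -- valid only for one convex object,
    # since concave shapes would need every entry/exit pair along the column.
    if hit_forward is None or hit_backward is None:
        return [0] * res
    lo, hi = sorted((hit_forward, hit_backward))
    return [1 if lo <= i <= hi else 0 for i in range(res)]

# Forward trace enters at texel 2, reverse trace enters at texel 5:
column = fill_column(8, 2, 5)
```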
Another idea I had, but wouldn’t know how to implement, is based on what Ryan Brucks did for the volumetric effects in the Fortnite cinematic. He created four SceneCapture2D components to render a mesh from all sides and then multiplied the results together to create a volume texture. I would only need a single slice of that volume. But again, I don’t know how to implement that, or whether it can be done in real time for any number of meshes within a small area.
At this point, I don’t know of any other solutions for creating a mask where specific meshes overlap the plane. The final result will basically have to be a 2D signed distance field in a compute shader that I have written. I then want to process it into a mask that is used in the shader for computations, as well as output it to a texture that can be used in a material (similar to what the image shows).
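If the sphere approach above is used, that 2D field can be built as the union (min) of per-sphere cross-section distances, then thresholded or feathered into the mask. A Python stand-in for the per-texel logic the compute shader would run (function names are mine):

```python
import math

def splat_sdf(x, y, circles):
    # 2D signed distance to the union of sphere cross-sections:
    # the minimum of the per-circle distances (negative inside any circle).
    # `circles` is a list of ((cx, cy), radius) splats on the water plane.
    return min(math.hypot(x - cx, y - cy) - r for (cx, cy), r in circles)

def sdf_to_mask(d, feather=0.25):
    # Soft mask for the material: 1 inside, fading to 0 across `feather`
    # units around the contour (saturate(0.5 - d / feather) in HLSL terms).
    return max(0.0, min(1.0, 0.5 - d / feather))

circles = [((0.0, 0.0), 1.0), ((1.5, 0.0), 0.5)]
d_center = splat_sdf(0.0, 0.0, circles)   # well inside the first circle
```

The same field could also drive the wave simulation directly, since distance to the contour is exactly what the ripple propagation needs.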
The only other solution I can see that could work, and might not be difficult to implement, is to fill an object with invisible particles (when Niagara becomes available, if that’s possible) and basically use the particle collisions to “paint” on a render target, using the draw-to-render-target method.
EDIT: I wonder, based on the creative solutions for the work he did, could @RyanB give any suggestions?