I have a crazy idea here that I need some help implementing, or rather with how I would even approach it.
In my scene, I have a landscape terrain with a tessellated snow material applied to it. This material deforms as my character walks on it: each step projects an alpha texture (just a blurred circle) straight down onto the terrain at the player’s position, and that texture tones the tessellation down to 0.
Up to this point it all makes sense, hopefully. Now, since tessellation happens in the shader and doesn’t interact with physics collisions, I have been thinking of a way to “fake it” for other objects that fall on the terrain, using the same technique. BUT, in order to achieve a more precise deformation, here is what came to mind: what if I could generate a projection of a 3D object onto the ground, no matter what its orientation is, and use that projection as the mask for the deformation?
So in other words, think of an area light shining down on an object: for EVERY FRAME -> the object casts a shadow on a flat surface -> use that shadow as a mask on the tessellated terrain.
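To make the idea concrete, here is a tiny NumPy sketch of what that top-down projection boils down to. Everything here is made up for illustration (function name, grid parameters); a real version would rasterize the mesh's triangles on the GPU, not splat vertices, but the point is that dropping Z gives you the footprint regardless of the object's orientation:

```python
import numpy as np

def silhouette_mask(vertices, grid_size=16, cell=0.5, origin=(0.0, 0.0)):
    """Project an object's vertices straight down onto the ground plane
    and rasterize them into a binary mask (1 = covered, 0 = empty)."""
    mask = np.zeros((grid_size, grid_size), dtype=np.float32)
    for x, y, _z in vertices:  # orientation doesn't matter: Z is discarded
        i = int((x - origin[0]) / cell)
        j = int((y - origin[1]) / cell)
        if 0 <= i < grid_size and 0 <= j < grid_size:
            mask[j, i] = 1.0
    return mask

# A tilted box (note the varying Z) still projects to the same footprint.
verts = [(1.0, 1.0, 0.5), (1.0, 2.0, 1.5), (2.0, 1.0, 2.5), (2.0, 2.0, 0.2)]
m = silhouette_mask(verts)
```

The resulting mask is exactly the "shadow" an orthographic light pointing straight down would produce.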
If you guys can help me with that OR suggest some other way of achieving snow deformation that works precisely with physical objects, that would be AWESOME. Ideally I would like to reach a level of precision like in the game “STEEP”, but that’s a dream for now haha.
P.S. I’m sorry if that doesn’t really fit the topic, but I figured it should go under either Rendering or Content Creation
Maybe you could use scene captures to store the depth from the terrain to the object in a render target, then draw that depth into the render target that masks the tessellation? Or just use one scene capture to capture all the deformers, and then compare against the current terrain depth?
Thanks for the reply! Can you elaborate on that a bit more? That sounds like a possible idea.
From what I understand, a Scene Capture 2D captures the scene from a specific camera and turns it into a 2D image. I suppose I’ll have to set up the capturing camera so that it points downward and only captures a certain object (say, a pistol being dropped on the ground). That COULD work, as long as the camera captures in orthographic mode and therefore gives me an accurate top-down image of the object. Then I will still need to convert the result into a pure white silhouette (my alpha map), which I can then spawn beneath the object to mask the tessellated terrain.
I’m not sure what you meant by “store depth from terrain to object”? I don’t use a terrain depth mask, because I don’t deform the terrain from a depth map. I just displace it evenly off the actual terrain plane (world displacement + tessellation), then mask out the areas where the displacement needs to go back down, by spawning an alpha texture on top of the tessellated terrain’s vertices.
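For reference, the masking described here boils down to something like this plain NumPy sketch (nothing engine-specific, all values are placeholders): every vertex is pushed up by a constant amount, except where the accumulated alpha stamps say not to.

```python
import numpy as np

base_displacement = 1.0                      # uniform height snow is pushed up
mask = np.zeros((8, 8), dtype=np.float32)    # accumulated alpha "stamps"

# "Spawn" a stamp (here a hard square; in practice a blurred circle texture)
# at the player's position on the terrain grid.
mask[3:6, 3:6] = 1.0                         # 1 = fully trampled

# Per-vertex world displacement: full height everywhere except under stamps.
height = base_displacement * (1.0 - mask)
```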
If the height is known, then it’s just a matter of capturing the depth from that height to the object, i.e. looking upwards. Then find the difference between the distance to the object and your default displacement.
A functional example would involve capturing the depth from the snow up to each object that should interact with it, then drawing a material into a render target that compares the previous snow height to the current depth to the objects, decreasing the height wherever that depth is smaller than the snow height.
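In NumPy pseudocode, the comparison above is essentially a per-pixel minimum that persists across frames (all names here are invented; in the engine this would be a material drawn into a persistent render target, not CPU code):

```python
import numpy as np

SNOW_HEIGHT = 1.0                            # default snow surface height
# Persistent "render target": lowest snow height seen so far, per cell.
snow = np.full((8, 8), SNOW_HEIGHT, dtype=np.float32)

# Fake one frame's capture looking up from the terrain:
# inf means nothing is above that cell.
captured = np.full((8, 8), np.inf, dtype=np.float32)
captured[2:4, 2:4] = 0.3                     # object sits 0.3 above the terrain

# Press the snow down wherever the object is lower than the stored height.
snow = np.minimum(snow, captured)
```

Because the buffer keeps the minimum, trails persist after the object moves away, which is exactly the behavior you want for footprints or dropped objects.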