I’m working on a weather system, and I’ve got a material function that applies wetness to any material (darkens/desaturates the diffuse for dampness, scales roughness with noise masks for forming puddles, adds the reflectivity of standing water) based on the angle of the face (standing water won’t form on vertical faces, etc.), but currently I don’t have a way to mask out the effect on flat areas that are under cover. I’m wondering if there’s a way to get white/black values (0/1 values to use as lerp alphas) from a scene depth visualization, based on a scene capture actor placed high above the world and pointed straight down along the Z axis, and then just apply the wetness globally via a post process material instead. (Like a blendable.)
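To make the angle-based part concrete, here's roughly what my existing per-face mask does, written as plain Python instead of material nodes (the function and parameter names are mine for illustration, not anything from UE):

```python
def up_facing_mask(normal, hardness=8.0):
    """Return ~1 for upward-facing surfaces, ~0 for vertical/downward ones.

    `normal` is a unit world-space normal (x, y, z). `hardness` sharpens
    the falloff, like feeding the dot product through a Power node, so
    standing water only accumulates on nearly flat faces.
    """
    # Dot product with world up (0, 0, 1) is just the Z component
    # of a unit normal; clamp so downward faces stay at zero.
    up_dot = max(normal[2], 0.0)
    return up_dot ** hardness
```

The result gets used as the lerp alpha between the dry and wet versions of the material; what I'm missing is a second mask for "is this surface under cover".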
So first off, how would I ensure that the SceneDepth node in the material editor samples from the scene capture object's render target? (Forgive my probable terminology errors, I'm not 100% sure I'm saying this right, haha.)
Also, say the scene capture object is pointing down at the ground, and in the middle of the scene there's a block above the ground. I'd want the top of the block, and the ground everywhere EXCEPT directly below the block, to render white (if I'm visualizing from the scene capture's point of view), and the area on the ground beneath the block to render black. Is Scene Depth even the right resource to do this?
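In case it clarifies what I'm after, here's the comparison I have in mind, sketched in Python rather than shader/material nodes (names and the bias value are mine; it's essentially the same test shadow mapping does):

```python
def sky_visibility(world_z, capture_z, captured_depth, bias=1.0):
    """Return 1.0 if a point can 'see' the top-down capture, else 0.0.

    `captured_depth` is the depth the capture stored in its render
    target at this point's (x, y) -- the distance down to the FIRST
    surface it hit. `capture_z` is the capture actor's height. A point
    farther from the capture than the stored depth must have something
    above it, so it's under cover. `bias` avoids self-occlusion
    artifacts, exactly as in shadow mapping.
    """
    point_depth = capture_z - world_z  # distance from capture down to the point
    return 1.0 if point_depth <= captured_depth + bias else 0.0
```

So with the capture at z=1000, open ground at z=0 stores depth 1000 and stays white, the block's top at z=100 stores depth 900 and stays white, and the ground under the block compares its depth of 1000 against the stored 900 and comes out black, i.e. no wetness applied there.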
Thanks in advance, and sorry for any terminology errors.