Question about Scene Depth

I’m working on a weather system, and I’ve got a material function that applies wetness to any material (it darkens/desaturates the diffuse for dampness, scales roughness with noise masks for puddles forming, and adds the reflectivity of standing water) based on the angle of the face (standing water won’t form on vertical faces, etc.). Currently, though, I don’t have a way to mask out the effect on flat areas under cover. I’m wondering if there’s a way to get white/black values (1/0 values to use as lerp alphas) from a scene depth visualization, based on a scene capture actor placed high above the world pointing straight down (parallel to the Z axis), and then just apply the wetness globally via a post process material instead (like a blendable).

So first off, how would I ensure that the render target for the Scene Depth node in the material editor comes from the scene capture actor? (Forgive my probable terminology errors; I’m not 100% sure I’m saying this right, haha)

Also, say the scene capture actor is pointing down at the ground, and in the middle of the scene there’s a block above the ground. I’d want the top of the block to render white (if I’m visualizing from the scene capture’s point of view), the ground everywhere EXCEPT directly below the block to render white, and the area on the ground beneath the block to render black. Is Scene Depth even the right resource for this?

Thanks in advance, and sorry for any terminology errors.

Anyone have any ideas?

It looks like scene depth will give you this info.
A simple shadow-pass calculation is exactly this: render scene depth from a camera’s view, then do a depth check to see where each object is in shadow and where it shadows itself. In your case it’ll be as if a directional light were casting shadows exactly from above. This will give you the 1/0 mask that you want.
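To make the depth-compare concrete, here’s a minimal sketch of that shadow-pass logic in plain Python (not material nodes; the scene representation, `CAPTURE_HEIGHT`, and the bias value are all made-up stand-ins). A top-down orthographic capture records depth to the first surface at each XY; a surface point is covered (dry) when the captured depth there is noticeably smaller than the point’s own depth, i.e. something sits between it and the sky.

```python
# Hedged sketch of the depth-test-from-above idea, modeled in Python.
# A "scene" is a toy list of (x, y, surface_height) entries.

CAPTURE_HEIGHT = 10_000.0  # hypothetical capture actor height in cm

def capture_depth(scene, x, y):
    """Depth from the capture to the highest surface at (x, y)."""
    top_z = max(h for (hx, hy, h) in scene if hx == x and hy == y)
    return CAPTURE_HEIGHT - top_z

def wet_mask(scene, x, y, surface_z, bias=1.0):
    """1.0 = exposed to rain, 0.0 = under cover (the lerp alpha)."""
    point_depth = CAPTURE_HEIGHT - surface_z
    # If the captured depth is (beyond a small bias) less than this
    # point's depth, something above occludes it -> black / dry.
    return 0.0 if capture_depth(scene, x, y) < point_depth - bias else 1.0
```

This matches the block example from the question: the block top and open ground read white (1), and the ground directly under the block reads black (0).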

I would go with precalculating the mask and updating it for dynamic objects only.
Since you’re talking about a system, you probably want it to work right away on everything, so it has to be fully dynamic. Maybe reusing the light and shadow code to create a new actor for this is the logical thing. The cost would be about the same as a low-res shadow, and you could even use the logic behind shadow LODs to make it faster.

Maybe there’s some other more clever way :slight_smile:

Yep, capturing scene depth from the top is a legit way of masking out surfaces covered by other objects. :smiley:

You could also try enabling distance fields for the project, raycasting upwards a bit, and then checking whether things are occluded that way. It might be cheaper than a scene capture, and you can iterate to find cover within a specified radius if you want.

I was looking into this initially instead of scene depth, but I wasn’t sure how something like a box trace could translate hits into distance field info, short of spawning custom-shaped translucent meshes with no collision along hit volumes (using the volume bounds as mesh bounds) and building those meshes with a button click or on BeginPlay or something. I figured that would be too complex/expensive.

You wouldn’t do a box trace; it would be a simple ray cast in the material. I.e., instead of sampling the Global Distance Field at WorldPosition, you would sample it at WorldPosition + (0, 0, 100), which checks the distance field 100 cm above the surface. Then you take the returned distance and use it for another check, iteratively. Multiply your samples together and you’ve effectively set up multiple ray checks. Using an initial offset of 100 cm lets the ray get away from the surface, so each check can hopefully search further until it slows down again as it nears geometry. Try doing ~5 of these checks.
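The fixed-offset version of those steps can be sketched like this in Python (a toy model, not material code: `sample_df` stands in for the Distance To Nearest Surface lookup, here a 1D signed field where an infinite cover plane sits at `cover_z`). Note the per-sample clamp to zero, so samples taken above/inside the cover don’t go negative.

```python
# Hedged sketch: fixed-step upward DF sampling, multiplied together.
# sample_df is a stand-in for the material's distance field lookup.

def sample_df(cover_z, z):
    # Toy signed distance: positive below the cover plane, negative above.
    # With no cover, return a huge "open sky" distance.
    return (cover_z - z) if cover_z is not None else 1e9

def occlusion_mask(cover_z, step=100.0, steps=5):
    """Multiply DF samples at increasing heights above the surface.
    A zero product means an upward ray hits something -> covered."""
    mask = 1.0
    z = 0.0
    for _ in range(steps):
        z += step
        mask *= max(sample_df(cover_z, z), 0.0)  # clamp negatives
    return mask
```

With 5 steps of 100 cm this effectively probes up to ~500 cm of cover; the product is in raw world units, so it needs rescaling before use as a lerp alpha (which comes up later in the thread).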

Note that you only need to search a limited distance above the ground: realistically, once the blocking cover is higher than a certain number of units, wind, drops hitting each other, and splashes off nearby obstacles will carry rain underneath the cover anyway.

So I tried setting it up, but I think I’m missing something: it seems like the gap between distance check iterations isn’t being compensated for, so at some object heights the distance field produces weird-looking patterns with big pieces missing from the mask. Here’s a gif showing what I mean:

And here’s my material. Ignore the vertex normal stuff; I was trying to get it to only visualize on top faces, but it didn’t work.

In the above case, it’s happening because you need a max(x, 0) after each Distance To Nearest Surface node; otherwise you’ll get negative values, which turn positive again when multiplied together.
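Toy numbers (made up for illustration) show the sign bug: two samples taken inside geometry multiply back to a positive "exposed" value unless each is clamped first.

```python
# Two negative signed-distance samples (i.e. points inside geometry):
a, b = -120.0, -80.0

unclamped = a * b                      # negatives cancel: wrongly positive
clamped = max(a, 0.0) * max(b, 0.0)    # per-sample clamp: correctly zero
```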

But you could also probably get by with fewer lookups by chaining the result of the 1st DF sample into the offset of the 2nd one, then doing it again, etc. That way it searches farther when the surface is farther.

I’m not sure what you mean by using the DF result as a sequential offset. Did you mean for each Z offset (pulled from the 3-vector node as pictured)? I tried using a MakeFloat3 node, running the output from each Max node (sequentially) into the Z slot of each MakeFloat3, and plugging those into the Add nodes, but had no luck.

In any event, I just set up a 500 unit offset per increment, and got some decent results:

and the masking setup:

The max height before DF info stops being reached is about 12 mannequins high, as a point of reference. Very awesome. You are my hero @RyanB

Yea, I basically meant to take the result from one DF node, turn it into a V3 by appending (0, 0, DFvalue), and use that as the offset for the 2nd lookup. You’d be able to search greater distances that way. But it looks like what you have works pretty well as it is.
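The chained version described here is essentially sphere tracing upward: each step advances by the previous sample’s distance, so empty space is crossed quickly. A Python sketch (again a toy model; `sample_df`, `min_step`, and the step count are assumptions, not thread specifics):

```python
# Hedged sketch: chained-offset (sphere-trace) upward DF march.
# sample_df(z) stands in for the material's distance field lookup.

def sphere_trace_up(sample_df, start_z, steps=5, min_step=100.0):
    """Each step's offset is the previous DF sample, so the search
    range adapts: far surfaces give big steps, near ones small steps."""
    z = start_z + min_step   # initial push off the surface
    mask = 1.0
    for _ in range(steps):
        d = max(sample_df(z), 0.0)  # clamp inside-geometry negatives
        mask *= d                   # any near-zero sample kills the mask
        z += max(d, min_step)       # advance by the sampled distance
    return mask
```

With a fixed 500-unit step (as in the setup above) the probe height is capped at steps × 500 units; the chained version trades that cap for adaptive reach.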

I did this exact effect with scene captures ages ago.

I never did get time to post the non-Z-only version (the math is a bit more involved, since you need to project points onto a line, but it’s nothing too troublesome), and it’s probably outdated (I remember one engine update ******** up some of the gamma conversions), but it ought to be a decent starting point.

One final question: how can I go about softening the edges a bit? I’ve tried a Power node and a CheapContrast node with no luck; the white/black transition is essentially a hard line.

That’s just because this is in world units, and all the multiplies make the values very large. You could divide by a scalar after each Max node and then clamp each result 0-1 before multiplying them together. The scalar should then be the world-unit width of the gradient.

If you do the divide at the end instead, it would have to be a huge divisor.
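The per-sample divide-and-clamp can be sketched like this (Python toy model; the `gradient_width` of 150 world units is an arbitrary example value):

```python
# Hedged sketch: normalize each clamped DF sample to 0-1 before the
# multiply, per the advice above, so the final mask is a soft 0-1 alpha.

def soft_mask(samples, gradient_width=150.0):
    """samples: raw world-unit distances from each DF check.
    Each is clamped to >= 0, scaled by the gradient width, and
    saturated to 0-1, so partial cover fades instead of hard-cutting."""
    mask = 1.0
    for s in samples:
        mask *= min(max(s, 0.0) / gradient_width, 1.0)
    return mask
```

Dividing per sample keeps every factor in 0-1; a single divide at the end would have to undo the product of several raw world-unit values at once, hence the "huge divisor".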

I see, that makes a lot of sense. Thanks so much again for the info, I really appreciate it. :heart:

This turned out great!