Reading CustomDepth buffer in translucent materials using orthographic camera?


I’ve created a translucent material in which I intend to read from the CustomDepth buffer and use that data to control whether certain parts of the rendered texture are translucent or not. Essentially, the idea is to mask out the parts of certain actors that do NOT overlap my actors that render to the CustomDepth buffer. Clipping sprites with other sprites, if you will.

I’m using an orthographic camera.

All my sprites are unlit.

I’ve got the following material down:

And I’ve set an actor to render to the Custom Depth buffer, which I verified in the editor, and in game using the “r.BufferVisualizationTarget CustomDepth” console command:

However, none of the actors using my custom material seem to care about the custom depth buffer, which I’ve tried to visualize with the following material:

With this material applied to my actors that are NOT rendering to custom depth, they show as 100% white, never changing color depending on whether or not they overlap the data rendered into the CustomDepth buffer.

What gives? What am I doing wrong here?

I first thought that maybe my orthographic camera couldn’t render to the CustomDepth buffer, but visualizing that buffer seems to show that indeed it can.

Never mind — turns out I was parsing the depth value incorrectly with regard to how my orthographic camera was set up. Reading from the CustomDepth buffer inside translucent materials is totally fine and works like you’d expect. :slight_smile:

Could you please explain your solution? I’m doing something similar and can’t figure it out.

So, the issue for me was that I needed to convert the CustomDepth buffer data from the actual depth values (which I assume could be anything from 0 to the max render distance) into either 0 or 1. At first I tried doing this by just clamping the values to the 0–1 range, but it turns out the depth buffer isn’t actually 0 by default, but instead holds the distance from the camera to its attached target.
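To see why plain clamping fails: raw CustomDepth samples are distances in scene units, so essentially every sample is far above 1 and clamps to the same value. A rough sketch, using made-up depth values purely for illustration:

```python
def clamp01(x):
    """Equivalent of a Clamp(0, 1) material node."""
    return max(0.0, min(1.0, x))

# Raw CustomDepth samples are scene-unit distances. Both a pixel an
# actor wrote (e.g. 500 units away) and an "empty" pixel holding a
# large default distance clamp to 1, so the mask can't tell them apart.
written_pixel = clamp01(500.0)    # -> 1.0
default_pixel = clamp01(10000.0)  # -> 1.0
```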

So what I ended up doing exactly was dividing the depth sample by a value that is higher than the camera’s distance to its view target, feeding the result into a 1-x node, and finally into a Ceil node.

This means that, after the 1-x and the ceil, every depth value below the divisor comes out as 1, and everything at or above it comes out as 0. Conveniently, the values my actors actually write to the custom depth buffer are the ones that end up at 1.
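The node chain above can be sketched as a small function (Divide → 1-x → Ceil). The divisor is whatever constant you pick that is larger than the orthographic camera’s distance to its view target, so treat the values below as assumptions:

```python
import math

def custom_depth_mask(depth, divisor):
    """Mirror of the described material graph: Divide -> 1-x -> Ceil.

    `divisor` is an assumed constant, chosen to be larger than the
    orthographic camera's distance to its view target.
    """
    normalized = depth / divisor   # Divide node
    inverted = 1.0 - normalized    # 1-x (OneMinus) node
    return math.ceil(inverted)     # Ceil node

# Depths below the divisor mask to 1; deeper samples go negative
# after the 1-x and ceil to 0.
```

Note that `math.ceil` on a negative fraction rounds up toward zero, which is what collapses the out-of-range depths to 0 here.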

Typing all this out makes it seem super complicated and way more work than is necessary… I’m sure it can be simplified greatly!