Pixel Depth Offset -- how to get the correct projection without perspective distortion?

From what I can tell, Pixel Depth Offset pushes the depth back perpendicular to the camera's near/far planes. How can I instead push it back along the per-pixel view ray? I basically just need to know the angle from the camera's forward axis to the pixel.

As an example, if I have a 90 degree FOV camera and want to push world space depth back by 10cm on the leftmost (vertically centered) pixel, the angle I need is 45 degrees, and I would feed the pixel depth offset pin 10cm × cos(45°) ≈ 7.07cm. But if I'm drawing the pixel in the center, I want to instead feed it 10cm × cos(0°) = 10cm. Is there an easy way to get the correct angle to feed to cosine? The CameraVector node is similar to what I need, but it is in world space.
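The trigonometry in the example above can be sketched like this (a minimal sketch, not UE material code; the function name and the NDC-style horizontal coordinate are my own parameterisation, assuming a symmetric frustum and a pixel on the horizontal centre line):

```python
import math

def pdo_for_view_ray_push(push_cm, half_fov_deg, ndc_x):
    """Convert a push of `push_cm` along a pixel's view ray into the
    perpendicular depth value Pixel Depth Offset expects.
    ndc_x: horizontal position in [-1, 1], 0 = screen centre."""
    # Angle between this pixel's view ray and the camera forward axis.
    angle = math.atan(ndc_x * math.tan(math.radians(half_fov_deg)))
    return push_cm * math.cos(angle)

# 90-degree horizontal FOV => half-FOV of 45 degrees.
print(pdo_for_view_ray_push(10.0, 45.0, -1.0))  # leftmost pixel -> ~7.07
print(pdo_for_view_ray_push(10.0, 45.0, 0.0))   # centre pixel  -> 10.0
```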

I ended up doing this:

Seems to be working even with extreme FOVs. Is this the right way to do it performance-wise? (The final Multiply's A pin is the depth, and the result is then fed into Pixel Depth Offset.)

This happens because the push is applied X units along the camera vector. If you want the apparent push to be the same from all angles, you basically need to project the depth further at glancing angles so that it reaches the virtual plane under your surface.

Take the dot product of CameraVector and VertexNormal, then divide your pixel depth offset by that.
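That correction can be sketched as follows (a sketch of the math only, not the material graph; the clamp against near-zero dot products is my addition, since at extreme grazing angles the division blows up):

```python
import math

def corrected_offset(offset, camera_vector, vertex_normal):
    """Divide the desired offset by dot(CameraVector, VertexNormal).
    Both vectors are assumed unit-length, as the UE material nodes
    of the same names provide."""
    d = sum(c * n for c, n in zip(camera_vector, vertex_normal))
    # Guard against division by ~0 at grazing angles (my addition).
    return offset / max(d, 1e-4)

# Surface faced head-on: offset unchanged.
print(corrected_offset(1.0, (0.0, 0.0, 1.0), (0.0, 0.0, 1.0)))  # -> 1.0
# Normal 60 degrees off the view: offset doubles, since 1/cos(60) = 2.
n = (math.sin(math.radians(60)), 0.0, math.cos(math.radians(60)))
print(corrected_offset(1.0, (0.0, 0.0, 1.0), n))                # -> ~2.0
```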

This image should make it clearer what is going on.

The red line is the length your pixel depth offset gets without modification. What you want is the green line, which is projected out to hit the virtual surface inside your actual surface.


In that example it's assuming the total height is 1, but scaling the height doesn't change anything (other than the value z being scaled by the height scale).

Hi Ryan, in my case I think it’s a different issue. I am overlaying a Kinect depth view over my game by having a camera look at a plane that covers the same FOV as the Kinect and uses pixel depth offset. So I really am wanting to push the depth out along the camera vector.

What I found was that when rotating (not translating) the camera, things would get pushed too far at the periphery. If the depth offset were pushing depth along the view ray, you would expect things to stay put under camera rotation, because the length of the ray to each point stays the same.

From what I read, it seemed to be happening because the z-buffer stores depths perpendicular to the clip plane. Though I wasn't sure whether it is stored like this in Unreal, or whether Pixel Depth Offset already accounts for it. For small FOVs the difference isn't very noticeable, but at large ones it is, towards the periphery. I'll try to make a simplified example project.
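The relationship being described can be sketched like this (a sketch under the assumption that stored depth is distance along the camera forward axis; function names are mine):

```python
import math

def scene_depth_delta(ray_push, ray_dir, cam_forward):
    """How much the stored (forward-axis) scene depth changes when a point
    is pushed `ray_push` units along its view ray. Both direction vectors
    are assumed unit-length."""
    cos_theta = sum(r * f for r, f in zip(ray_dir, cam_forward))
    return ray_push * cos_theta

# Centre pixel: ray is the forward axis, depth changes by the full push.
print(scene_depth_delta(10.0, (0.0, 0.0, 1.0), (0.0, 0.0, 1.0)))  # -> 10.0
# Periphery pixel of a 90-degree FOV camera: ray is 45 degrees off-axis,
# so a 10-unit push along the ray is only ~7.07 units of stored depth.
s, c = math.sin(math.radians(45)), math.cos(math.radians(45))
print(scene_depth_delta(10.0, (s, 0.0, c), (0.0, 0.0, 1.0)))      # -> ~7.07
```

This is the inverse of the grazing-angle correction above: feeding Pixel Depth Offset the ray-space push uncorrected over-pushes peripheral pixels under rotation, which matches the symptom described.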

Ah, I see. I misunderstood. The same basic principle is at play.

It may be slightly cheaper to do Dot(CameraVector, CameraDirectionalVector), since a transform most likely performs a few dot products.

@muchcharles & @RyanB do you mind posting the image again? After the forum move, the linked image is lost.