sceneDepth vignetting

Hi,

We are using UE4.16 for a science project and need to extract SceneDepth. However, it seems that SceneDepth exhibits some vignetting at the edges of the screen.

I’ve reproduced this problem in a clean project based on the Flying template. The only modifications I made are:

  • I created a post process material as shown in the screenshot
  • I applied this material to a Post Process Volume.

Material screenshot:

Then I positioned a camera facing a wall perpendicularly, took a screenshot, rotated the camera (without any translational movement), and took another screenshot. The distance between the camera and any given point on the wall should therefore be the same in both screenshots.

Actual render of the scene (for reference):

However, when I measure the color values of the same spot on the wall in the two screenshots, they are quite different. Since my depth material maps depth from white to black (white meaning near and black meaning far), the same spot is much darker when it is at the edge of the screen.

To make the problem more apparent, I imported the screenshots into Photoshop and increased the contrast.

Can anyone confirm that this is how the SceneDepth node behaves, and suggest a possible solution?

Thanks!

Scene depth is not the distance between the camera and the pixel. It is the Z-axis coordinate of the pixel in view space.
What you are describing is therefore correct behavior.
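To make the difference concrete, here is a rough C++ sketch (not Unreal code, just illustrative vector math), assuming the usual convention that scene depth is the camera-to-pixel vector projected onto the camera's forward axis:

    #include <cmath>
    #include <cstdio>

    struct Vec3 { float x, y, z; };

    Vec3  operator-(Vec3 a, Vec3 b) { return {a.x - b.x, a.y - b.y, a.z - b.z}; }
    float Dot(Vec3 a, Vec3 b)       { return a.x * b.x + a.y * b.y + a.z * b.z; }
    float Length(Vec3 v)            { return std::sqrt(Dot(v, v)); }

    int main()
    {
        Vec3 CameraPos = {0.f, 0.f, 0.f};
        Vec3 WallSpot  = {500.f, 0.f, 0.f};         // a fixed spot on the wall

        Vec3 ForwardStraight = {1.f, 0.f, 0.f};     // camera looks straight at the spot
        Vec3 ForwardRotated  = {0.866f, 0.5f, 0.f}; // camera rotated 30 degrees in place

        float Distance      = Length(WallSpot - CameraPos);               // 500 either way
        float DepthStraight = Dot(WallSpot - CameraPos, ForwardStraight); // 500
        float DepthRotated  = Dot(WallSpot - CameraPos, ForwardRotated);  // ~433

        std::printf("distance %.1f, depth straight %.1f, depth rotated %.1f\n",
                    Distance, DepthStraight, DepthRotated);
    }

The Euclidean distance to the spot does not change when the camera only rotates, but its view-space Z does, which is exactly the variation you measured across the screen.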

Oh, OK. So in order to get that distance, I should render the Scene Depth in World Units buffer, right? From what I was able to test, its color values are independent of camera rotation.

You can calculate the distance between the absolute world position and the world-space camera position.
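In the material graph that would be the Absolute World Position and Camera Position nodes feeding a Distance node (or an equivalent subtraction and length). A rough C++ sketch of the same math, mapped to a white-to-black gradient like yours (the 10000-unit range is just a placeholder you would tune):

    #include <algorithm>
    #include <cmath>

    struct Vec3 { float x, y, z; };

    float DistanceBetween(Vec3 a, Vec3 b)
    {
        float dx = a.x - b.x, dy = a.y - b.y, dz = a.z - b.z;
        return std::sqrt(dx * dx + dy * dy + dz * dz);
    }

    // White (1.0) at the camera, fading to black (0.0) at MaxRange and beyond.
    // MaxRange is an illustrative constant, not an engine value.
    float DepthToGrayscale(Vec3 PixelWorldPos, Vec3 CameraWorldPos, float MaxRange = 10000.f)
    {
        float Dist = DistanceBetween(PixelWorldPos, CameraWorldPos);
        return 1.f - std::min(Dist / MaxRange, 1.f);
    }

Because this value depends only on the two positions, it does not change when the camera merely rotates, which is the behavior you were expecting.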

Deathrey - I believe you, but I don’t understand - could you help explain it better than Epic?

“The PixelDepth expression outputs the depth, or distance from the camera, of the pixel currently being rendered… The SceneDepth expression… is similar…”

This doc makes it sound like Scene Depth relates the pixel to the camera, but I’m totally open to my reading being wrong, or to another take. I don’t understand exactly what a “Z axis coordinate of the pixel in view space” is, as opposed to a camera-based depth to a world-space coordinate, and I expect the difference is important for troubleshooting certain problems.

Google isn’t helping much - some otherwise helpful blogs and videos treat Scene Depth as a shortcut for depth from the camera, but I’m wary. From what I’ve read around this site, you seem very proficient, so I thought I’d ask if you have time to explain.