I’m trying to use Unreal to create ground-truth depth for a computer vision project, and I’m seeing a systematic bias when I compare the rendered depth at the plane’s center to the camera’s distance from it (taken from its world coordinates). Am I missing something? Is this a bug? This is my setup:
- I have a simple scene with a flat plane centered at (0,0,0), and a linear camera sequence looking normal to the plane at its center, moving from 1 m away out to 500 m away. The depth sampled at the center pixel should therefore match the camera’s distance to the origin.
- Depth is rendered with a post-process material through Movie Render Queue into an EXR. To get around some floating-point precision issues I was seeing, I packed the depth across multiple color channels and decode it as:
depth = R + G*1e-3 + B*1e-6
- I have a repeating sequencer event that saves the camera’s world position at each frame; a sketch of how I decode the EXRs and compare them against these positions follows this list.
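
In case it helps to see the exact comparison, here is a minimal sketch of the decode-and-compare step in Python. The file names (camera_log.csv, depth.####.exr), the CSV column layout, and the helper names decode_depth/center_depth are just placeholders for my pipeline, and I’m assuming OpenCV’s EXR reader; the only Unreal-specific part is the channel packing from the bullet above.

```python
import os
os.environ["OPENCV_IO_ENABLE_OPENEXR"] = "1"  # must be set before cv2 is imported
import cv2
import numpy as np

def decode_depth(exr_path):
    """Read one rendered EXR frame and undo the RGB channel packing."""
    img = cv2.imread(exr_path, cv2.IMREAD_UNCHANGED).astype(np.float64)
    b, g, r = img[..., 0], img[..., 1], img[..., 2]  # OpenCV loads channels as BGR
    return r + g * 1e-3 + b * 1e-6                   # same packing as the material

def center_depth(depth):
    """Sample the center pixel (where the camera looks at the plane)."""
    h, w = depth.shape
    return depth[h // 2, w // 2]

# camera_log.csv is the per-frame output of the sequencer event: frame, x, y, z
# (assuming the log and the rendered depth use the same units)
cam = np.loadtxt("camera_log.csv", delimiter=",", skiprows=1)
for frame, x, y, z in cam:
    d = center_depth(decode_depth(f"depth.{int(frame):04d}.exr"))
    expected = np.linalg.norm([x, y, z])  # plane is centered at the origin
    print(f"frame {int(frame)}: depth={d:.3f}  expected={expected:.3f}  "
          f"diff={d - expected:+.3f}")
```

The diff column is where I see the systematic bias grow as the camera moves away from the plane.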