Hi
I’m trying to simulate a depth camera.
After some research I found that this should be possible with an ASceneCapture2D actor by setting CaptureSource to SCS_SceneDepth or SCS_DeviceDepth.
According to this post, SCS_SceneDepth should output values in cm, but I get values in [0, 1].
The calculation with SCS_DeviceDepth seems to work (but alpha is always 1).
I set up the TextureTarget as follows:
renderTarget = UCanvasRenderTarget2D::CreateCanvasRenderTarget2D(GetWorld(), UCanvasRenderTarget2D::StaticClass(), PictureResolutionX, PictureResolutionY);
renderTarget->bHDR = 1;
renderTarget->InitAutoFormat(PictureResolutionX, PictureResolutionY);
Then I read the pixels back into FLinearColor structs.
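For completeness, this is roughly how I do the readback (using GameThread_GetRenderTargetResource and ReadLinearColorPixels from the engine; X and Y here stand for the pixel coordinates I'm sampling):

```cpp
// Fetch the render target's resource on the game thread and read all
// pixels back as FLinearColor values.
FTextureRenderTargetResource* RTResource =
    renderTarget->GameThread_GetRenderTargetResource();

TArray<FLinearColor> Pixels;
RTResource->ReadLinearColorPixels(Pixels);

// The depth ends up in the red channel; this is the value that
// stays in [0, 1] instead of being in cm.
const float Depth = Pixels[Y * PictureResolutionX + X].R;
```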
If I read the red channel of an SCS_SceneDepth pixel I get values in [0, 1], contrary to the promised cm values.
This wouldn’t be a problem by itself, but the [0,1] values from SCS_SceneDepth (and also from SCS_DeviceDepth) don’t seem to be linear, so I don’t know how to convert them back to cm values.
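For reference, this is the kind of inverse mapping I would expect for SCS_DeviceDepth, assuming UE4's reversed-Z projection with an infinite far plane (so DeviceZ = NearClip / SceneDepth, with the near clip plane in cm). I'm not sure this assumption is correct, which is part of my question:

```cpp
#include <cassert>
#include <cmath>

// Assumption: reversed-Z projection with infinite far plane, where
// DeviceZ = NearClipCm / SceneDepthCm. Then SceneDepthCm = NearClipCm / DeviceZ.
// DeviceZToSceneDepthCm is my own helper name, not an engine function.
float DeviceZToSceneDepthCm(float DeviceZ, float NearClipCm)
{
    // DeviceZ == 0 corresponds to "infinitely far away" under reversed-Z;
    // guard against dividing by it.
    if (DeviceZ <= 0.0f)
        return INFINITY;
    return NearClipCm / DeviceZ;
}
```

With the default near clip of 10 cm, DeviceZ = 1 would map to 10 cm and DeviceZ = 0.5 to 20 cm, but my measured values don't line up with this, which makes me suspect the values are rescaled somewhere.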
Since I'm trying to simulate a maximum range on the camera, this is a problem: I need to scale the values so that white to black maps linearly from 0 cm to e.g. 350 cm.
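To make that intention concrete, this is the mapping I'm after once I have real cm values (DepthCmToGray is a hypothetical helper name of mine, not an engine function):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Map a depth in cm to a grayscale value: white (1.0) at 0 cm,
// black (0.0) at MaxRangeCm (e.g. 350 cm), linear in between.
// Anything beyond the max range clamps to black.
float DepthCmToGray(float DepthCm, float MaxRangeCm)
{
    const float T = std::clamp(DepthCm / MaxRangeCm, 0.0f, 1.0f);
    return 1.0f - T;
}
```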
I made some measurements with accurately placed planes and discovered another problem: the cm range behind the [0, 1] values does not seem to be static. If a plane fills the whole field of view, the values get rescaled so that the plane is displayed in white.
So my questions are:
- How do I read the cm values from an SCS_SceneDepth pixel?
- Since I would like the best possible precision: how do I convert the [0, 1] values back to cm?
- And lastly, how do I prevent the rescaling described above?
Thanks for any information. I found next to nothing on this topic that actually seems to work, and sites like this are not helping at all.