What does DeviceDepth option under SceneCaptureComponent2D capture?

I find the depth values wrap between 0.0 and 1.0 in bands, and the wrapping differs between the RGB channels of the captured DeviceDepth.

How can I get the absolute depth value in UE4?


DeviceDepth returns the Z value in the 0-1 range. The value is discretized and converted into an RGBA texture to use the full 32 bits with minimal loss of precision. To reconstruct the 0-1 range, the formula is:

R + G/255 + B/65025 + A/16581375 = Z-value

That is for an RGBA32f render target. If you use an RGBA texture (8 bits per channel), you have to divide everything by 255.
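To make the packing concrete, here is a minimal sketch (in Python, for readability) of a frac-based encode that the reconstruction formula above inverts. The encode side here is an assumption about how such packings are typically done, not the engine's own code, but the decode line is exactly the formula from the post:

```python
import math

def frac(v):
    return v - math.floor(v)

def encode_depth(z):
    """Pack z in [0, 1) into four 0-1 channels (hypothetical frac-based packing)."""
    r = frac(z)
    g = frac(z * 255.0)
    b = frac(z * 65025.0)
    a = frac(z * 16581375.0)
    # subtract the portion that the next, finer channel already stores
    return (r - g / 255.0, g - b / 255.0, b - a / 255.0, a)

def decode_depth(r, g, b, a):
    """The reconstruction formula quoted above: R + G/255 + B/65025 + A/16581375."""
    return r + g / 255.0 + b / 65025.0 + a / 16581375.0

z = 0.3173
assert abs(decode_depth(*encode_depth(z)) - z) < 1e-9
```

The decode sum telescopes: each carry subtracted during encoding is added back by the next channel's weight, so the round trip recovers z up to floating-point error.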

If you want the depth in cm, you can use “SceneDepth in R”. This will retrieve the linear depth in cm. The RenderTarget2D texture must be in R32f format. The preview in the editor will be all red since we can’t display HDR textures properly. If you want to validate that you are getting what you want, you can create a simple material to rescale the values into a visible range. See attached image.



What’s with that magic number 0.000556? What does it mean? I am trying to read the depth value in C++ code, like this:

    FTextureRenderTarget2DResource* resource = (FTextureRenderTarget2DResource*)this->kinectRT->Resource;

    if (resource->ReadLinearColorPixels(this->m_PixelBuffer) == true)

I am not sure if this is the right way to go, but there are some values in the R component of the color. The range is kinda confusing.

Martin can you give some more information about how to get depth values in code?


Just playing around, I noticed the depth values in R correspond to cm, but when the object is farther away the depth values are lost: R will show a value of 65504.0. Any thoughts?
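For what it’s worth, 65504.0 is exactly the largest finite value a 16-bit float can hold, so a clamped R like that is consistent with the render target using a half-float format rather than R32f. A quick check that assumes nothing about UE4 itself:

```python
import struct

# Largest finite half-precision float: sign 0, exponent 11110, mantissa all ones
bits = 0b0111101111111111  # 0x7BFF
half_max = struct.unpack('<e', struct.pack('<H', bits))[0]
print(half_max)  # 65504.0
```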

I followed MartinS’s instructions and created a material for the user interface domain. I found out that the final color is expected to be in the range of 0-1, so it seems like sRGB is being used here. This explains the magic number used by MartinS.
The red channel returns the distance in cm from the scene capture component, but we have to get it into a range of 0-1 in order to visualize it. To do this you need to divide your red channel by the maximum distance you want to perceive. So dividing by 100 would mean you visualize everything up to a distance of 1 m = 100 cm. Multiplying by 0.000556 is the same as dividing by 1/0.000556 ≈ 1799 cm, so here we go. It’s not a magic number, it’s just an arbitrary example value to prove his point.

Thank you, thank you, thank you! I’ve spent hours trying to get scene capture with scene depth in R to work. Your advice wrt RTF R32f is gold.

@Svegn2 Thanks for answering the depth conversion part. I am not familiar with Unreal Engine but have depth maps captured in Unreal by a third-party tool. These depth maps are in 8-bit RGBA channels (PNG). I tried converting them using the approach you mentioned above (R/255 + G/255 + B/255 + A/255), but then the values won’t be in the [0, 1] range, as (R + G + B + A) < 255 is not guaranteed. Can you please clarify the conversion here?

Also, is there any global scale for the depth maps captured here? Or is [0, 1] the global scale here?

The formula for an 8-bit RGBA render target should be

(R + G/255 + B/65025 + A/16581375) / 255

For reference, the encoding code is in SceneCapturePixelShader.usf.
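Put another way, each 8-bit channel first has to be brought back to its 0-1 float value before the per-channel weights apply, which is why the whole sum is divided by 255 rather than each channel individually. A sketch of that decode (not the engine’s own code):

```python
def decode_depth_rgba8(r, g, b, a):
    """Reconstruct the 0-1 depth from four 8-bit integer channels (0-255)."""
    return (r + g / 255.0 + b / 65025.0 + a / 16581375.0) / 255.0

# R carries the coarse value; G, B, A only refine it, so summing all four
# equally (R/255 + G/255 + ...) overshoots, as observed above.
```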

Hi, how do you read the depth value in C++? Is it through the above m_PixelBuffer?

The problem is you can’t get this RTF R32f back to C++… Any ideas why this format is not supported in ConvertDXGIToFColor? No one bothered? Should I make a PR on this, or is this a bad idea? Is there any workaround (aside from copy-pasting the whole call chain up to ConvertDXGIToFColor)?

PS. Managed to get the distance from PF_A32B32G32R32F + FReadSurfaceDataFlags(RCM_MinMax), but using 16 bytes instead of 4 for the render target breaks my heart (and my performance, a bit).

I think I got a solution that saves the image depth into a .txt or .csv file, as follows.
(1) Create a render target; under Texture Render Target 2D → Render Target Format, set it to RTF RGBA16f. See screenshot:

(2) In the Scene Capture section of the SceneCapture2D, set the capture source to “SceneColor (HDR) in RGB, SceneDepth in A”, which means channel A stores the image depth in 16-bit float format. See screenshot:
(3) The C++ code to read the image depth in channel A and write it into a .csv file:

    UTextureRenderTarget2D* rt = DepthCaptureShowAll->TextureTarget;
    FTextureRenderTargetResource* rtResource = rt->GameThread_GetRenderTargetResource();
    TArray<FFloat16Color> outBMP;
    // Read back the 16-bit float pixels from the render target
    rtResource->ReadFloat16Pixels(outBMP);

    // Clamp the rendered image depth to a maximum of 3000 cm, i.e. 30 m
    float MaxDepth = 3000.f;
    FString SaveString;
    for (int32 i = 0; i < rt->GetSurfaceHeight(); i++) { // row
        for (int32 j = 0; j < rt->GetSurfaceWidth(); j++) { // column
            // The depth value is stored in channel A (16-bit float, in cm)
            FFloat16Color PixelColor = outBMP[j + i * rt->GetSurfaceWidth()];
            float DepthValue = PixelColor.A.GetFloat();
            SaveString += FString::Printf(TEXT("%.2f"), FMath::Min(DepthValue, MaxDepth));
            if (j < rt->GetSurfaceWidth() - 1)
                SaveString += TEXT(",");
        }
        if (i < rt->GetSurfaceHeight() - 1)
            SaveString += LINE_TERMINATOR;
    }

    if (FFileHelper::SaveStringToFile(SaveString, *fileDestination,
            FFileHelper::EEncodingOptions::AutoDetect, &IFileManager::Get()))
        GEngine->AddOnScreenDebugMessage(-1, 1.0f, FColor::Green, TEXT("File save succeeded."));
    else
        GEngine->AddOnScreenDebugMessage(-1, 5.0f, FColor::Red, TEXT("File save failed."));

(4) Finally, use Python to read the saved depth file, for example:
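A minimal Python sketch for step (4), assuming the .csv layout written by the code above (comma-separated columns, one row per line; the file name is a placeholder):

```python
import csv

def load_depth_csv(path):
    """Load the saved depth map into a 2D list of floats (rows x columns)."""
    with open(path, newline="") as f:
        return [[float(v) for v in row] for row in csv.reader(f)]

# depth = load_depth_csv("depth.csv")  # placeholder file name
# print(len(depth), len(depth[0]))     # height x width of the capture
```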