Can anyone explain what the SceneTexture:SceneDepth material node does?

I am trying to understand how the SceneTexture:SceneDepth material node works in the following context:

A Scene Capture Component 2D is assigned this post-process material in its rendering features. The purpose of the material is to determine whether any objects are within a given depth offset and, if so, mask them in a corresponding render target.

I have no idea what the SceneTexture:SceneDepth material node actually does or what values it outputs. I have tried to read through its source, but it still doesn’t make sense to me.


// UMaterialExpressionSceneDepth (braces and the missing else branch restored for readability)
UMaterialExpressionSceneDepth::UMaterialExpressionSceneDepth(const FObjectInitializer& ObjectInitializer)
	: Super(ObjectInitializer)
{
	// Structure to hold one-time initialization
	struct FConstructorStatics
	{
		FText NAME_Depth;
		FConstructorStatics()
			: NAME_Depth(LOCTEXT("Depth", "Depth"))
		{
		}
	};
	static FConstructorStatics ConstructorStatics;

	Outputs.Add(FExpressionOutput(TEXT(""), 1, 1, 0, 0, 0));
	bShaderInputData = true;
	ConstInput = FVector2D(0.f, 0.f);
}

void UMaterialExpressionSceneDepth::PostLoad()
{
	Super::PostLoad();
	// Connect deprecated UV input to new expression input
	InputMode = EMaterialSceneAttributeInputMode::Coordinates;
	Input = Coordinates_DEPRECATED;
}

#if WITH_EDITOR
int32 UMaterialExpressionSceneDepth::Compile(class FMaterialCompiler* Compiler, int32 OutputIndex)
{
	int32 OffsetIndex = INDEX_NONE;
	int32 CoordinateIndex = INDEX_NONE;
	bool bUseOffset = false;

	if (InputMode == EMaterialSceneAttributeInputMode::OffsetFraction)
	{
		if (Input.GetTracedInput().Expression)
		{
			OffsetIndex = Input.Compile(Compiler);
		}
		else
		{
			OffsetIndex = Compiler->Constant2(ConstInput.X, ConstInput.Y);
		}
		bUseOffset = true;
	}
	else if (InputMode == EMaterialSceneAttributeInputMode::Coordinates)
	{
		if (Input.GetTracedInput().Expression)
		{
			CoordinateIndex = Input.Compile(Compiler);
		}
	}

	int32 Result = Compiler->SceneDepth(OffsetIndex, CoordinateIndex, bUseOffset);
	return Result;
}

void UMaterialExpressionSceneDepth::GetCaption(TArray<FString>& OutCaptions) const
{
	OutCaptions.Add(TEXT("Scene Depth"));
}
#endif // WITH_EDITOR

FString UMaterialExpressionSceneDepth::GetInputName(int32 InputIndex) const
{
	if (InputIndex == 0)
	{
		// Display the current InputMode enum's display name.
		UByteProperty* InputModeProperty = FindField<UByteProperty>(UMaterialExpressionSceneDepth::StaticClass(), TEXT("InputMode"));
		return InputModeProperty->Enum->GetNameStringByValue((int64)InputMode.GetValue());
	}
	return TEXT("");
}

Any help would be much appreciated!

Scene depth gives you a per-pixel value representing the distance from the camera plane to a mesh in the scene. It’s not quite the distance to the camera’s origin, though :wink:

You can see a grayscale image if you multiply the value properly and feed it straight to the output of a post process, e.g. Rendering out a scene depth pass - #2 by LMP3D - Rendering - Unreal Engine Forums

I also made a post about the difference between scene depth and the actual distance to the camera in the context of a water shader: Natural Depth through translucent Material - Community Content, Tools and Tutorials - Unreal Engine Forums

For the actual declaration, I have no idea :[
If you’d like to know more about the render process in general: How Unreal Renders a Frame – Interplay of Light
To my understanding the z-prepass is very similar but reversed: 1 is closest to the camera.

Thank you for the reply BOB.

If you’re willing, can you or anyone help me dissect the SceneTexture:SceneDepth node itself:


  • UVs: The UV input allows you to specify where you want to make a texture lookup (only used for the Color output).

What does “make a texture lookup” mean?


  • Color: The color output is a 4 channel output (actual channel assignment depends on the scene texture id).

I would assume the channels are (R, G, B, A). Is that equivalent to (x, y, z, ?)? Would you just mask out the alpha channel, since there aren’t four dimensions in Cartesian coordinates, thank God? What does (R, G, B, A) even tell you about a pixel’s location? Those are just color values, right? Is R always the x-axis, G always the y-axis, and B always the z-axis?

  • Size: Size is a 2 component vector with the width and height of the texture

  • InvSize: just 1/size

Regarding the post you referenced.
In the details panel of the post process material:
Post Process Material
Blendable Location = Replacing the Tonemapper

Can you explain the description for BL_ReplacingTonemapper and why you need to change it from BL_AfterTonemapping to that?

texture lookup

I’m not 100% certain, but as far as I understand it, a texture lookup (or texture sample) is the process of reading a texel from a texture on the graphics card at given coordinates. UVs are texture coordinates (x, y → u, v) ranging from 0 to 1; you might see UVW for volumetric textures. So if you have a texture stretched across your whole screen, you’d just multiply the UV coordinate by your screen resolution to get the pixel position. But most textures are distorted by perspective on a 3D object.


R = red, G = green, B = blue, A = alpha. Textures from the render process just have alpha at 1, which means fully opaque. I never used it, but I guess it’s useful for compositing, like Slate/UMG (User Interface stuff).


Yes, just the pixel resolution. By the way: Epic staff also made debug nodes to see these values rendered as numbers; just search for “debug” when you add a node. These only work for values that are independent of pixel position. If you fed in e.g. the pixel colors, you would just see fragments of numbers.

In the screenshot it displays the preview window resolution.


This is specific to post-process materials. The tonemapper includes the exposure process. You can set the post process to happen before the tonemapper, after it, or replace the tonemapper with your own function. If I’m not mistaken: if you set it before the tonemapper, you get a higher range of color (16 bits per channel?) to work with, instead of the already mapped and cut-off 8-bit colors for standard monitors. I don’t know how this is implemented with HDR monitors.