Custom depth and large scenes

We are trying to render a translucent material with “Disable Depth Test = true”, which should be rendered behind most objects. Behind some objects, however, it should not be rendered. We render those objects into the custom depth texture and created a material function, added to the translucent material, that checks whether the current pixel is behind a “truly” opaque object:

[Image Removed]

This works very nicely, but if the translucent object moves very far away, it just disappears. It is important to note that we work with “real world” scales/distances (we are trying to render the world globe at real scale). Does the custom depth texture have lower precision? Or could the issue be that its default value is not max float?

Steps to Reproduce

  1. Create new Unreal Project
  2. Delete all landscape/fog/atmosphere actors from default level (basically everything except for SkyLight & DirectionalLight)
  3. Add a cube to the scene and set its transform to Location (X=-750000086.000000, Y=0.000000, Z=90420.000000), Rotation (Pitch=0.000000, Yaw=45.000000, Roll=0.000000), Scale (X=10000000.000000, Y=10000000.000000, Z=10000000.000000)
  4. Create new translucent material with [Image Removed]
  5. Assign translucent material to cube
  6. Rotate the camera towards the cube and notice that parts of it are not rendered correctly [Image Removed]

Hi,

You’re correct in saying that, in general, a precision issue occurs at large scales, as distant objects have less floating-point precision.

According to this thread, CustomDepth defaults to roughly 100,000,000 wherever there are no objects with Render CustomDepth Pass enabled.

So the combination of precision loss at this large scale and the value defaulting to something other than max float is most likely the culprit.

Because of this, any pixel whose depth goes above ~100,000,000 units ends up “behind” the custom depth default, which results in the material disappearing.

A couple of things could be tried to remedy this: adding a bias to the custom depth, or checking whether custom depth is at its ~100,000,000 default and filtering it out.

Another option could be to potentially use Custom Stencils, but I’m unsure if they would fit your project’s needs.

Please let me know what you think, thank you!

Regards

A billion years ago, this code was added to CreateInvDeviceZToWorldZTransform() in SceneView.cpp:

// Subtract a tiny number to avoid divide by 0 errors in the shader when a very far distance is decided from the depth buffer.
// This fixes fog not being applied to the black background in the editor.
SubtractValue -= 0.00000001f;

This makes the View_InvDeviceZToWorldZTransform used for the calculations look like this in the shader:

View_InvDeviceZToWorldZTransform	{ 0, 0, 0.1, -1e-08 }	float4
        C0	0	float
        C1	0	float
        C2	0.1	float
        C3	-1e-08	float

The decompiled shader code for the Material above (thank you PIX, r.shader.symbols=1, r.shaders.optimize=0, r.Shaders.RemoveDeadCode=1) looks like this (with bits stripped out):

// Get the pixel depth for input A
float Local1 = GetPixelDepth(Parameters);
float Local2 = Local1.r;
// Get the depth for input B from the custom depth texture (which defaults to all zeros)
float4 Local3 = SceneTextureLookup(GetDefaultSceneTextureUV(Parameters, 13), 13, false);
// Make opaque where the scene texture depth is greater than the pixel depth
float Local4 = select_internal((Local2 >= Local3.rgba.r), 0.00000000f, 1.00000000f);
PixelMaterialInputs.Opacity = Local4;

SceneTextureLookup() calls CalcSceneCustomDepth(), which looks like this:

float CalcSceneCustomDepth(float2 ScreenUV)
{
	return ConvertFromDeviceZ(Texture2DSampleLevel(TranslucentBasePass_SceneTextures_CustomDepthTexture, TranslucentBasePass_SceneTextures_PointClampSampler, ScreenUV, 0).r);
}

And ConvertFromDeviceZ() is where View_InvDeviceZToWorldZTransform is used:

float ConvertFromDeviceZ(float DeviceZ)
{
	return DeviceZ * View_InvDeviceZToWorldZTransform[0] + View_InvDeviceZToWorldZTransform[1] + 1.0f / (DeviceZ * View_InvDeviceZToWorldZTransform[2] - View_InvDeviceZToWorldZTransform[3]);
}

And that is why you don’t get the full depth range: the epsilon is there to fix fog on the empty background and to avoid potential divide-by-zero errors.

If you comment out the code in CreateInvDeviceZToWorldZTransform() in SceneView.cpp that modifies SubtractValue and recompile, you won’t have this limitation and will see the entire red cube in all its glory.

However, I’m not advocating you do that just yet; there’s likely a better way™ that uses the large world coordinates functionality to achieve what you want, and I’m consulting with my colleagues on that.

A large world coordinates solution doesn’t currently exist for this. Without changing the epsilon in CreateInvDeviceZToWorldZTransform() you’ll be giving up some precision. A couple of ideas: implement your own version of ConvertFromDeviceZ() that ignores View_InvDeviceZToWorldZTransform[3] and use it in a Custom HLSL node, or run the PixelDepth through something like this in a Custom HLSL node (not tested) to introduce the epsilon there as well:

	return ConvertFromDeviceZ(ConvertToDeviceZ(PixelDepth));

Neither is a very elegant solution.

I’m unaware of any plans to change this behavior currently, but I’ve created the following issue for tracking, which should be visible publicly soon.

Hi John,

Is there a reason why Custom Depth defaults to ~100,000,000 and not a bigger float? I assume checking for 100,000,000 with equals doesn’t work because the value is not exactly 100,000,000. We would like to use the material function for bigger values as well; the rest of the rendering seems to work fine at big scales without any z-fighting (so the normal depth buffer seems to support really big values).

If the custom depth problem can’t be fixed, we will probably have to switch to custom stencils, but this would also hide the translucent material when it is in front of the opaque material, which is not optimal.

Unfortunately, I’m not sure why it goes to about 100,000,000.

You’re mostly correct in saying that checking for 100,000,000 wouldn’t work with equals; after testing it a bit, custom depth == value also returns true when value goes above 100,000,000.

I.e. custom depth == 300,000,000 also returns true.

I’m unfamiliar with the depths of this part of the engine, so I’ll reassign the case to Epic so they can provide more information on the custom depth choices.

Thank you for your patience!

Thanks a lot for the detailed reply. If possible, we would like to use a fix which works with the precompiled Unreal version.

Let me know when you get an answer from your colleagues; maybe just using a smaller value (float/double epsilon) instead of 0.00000001f would already fix the current problem for us.

Thanks for the ideas! We will definitely try this. Will this be fixed/improved in a future Unreal version?