Capturing Depth from Unreal Engine

I have been trying to get per-pixel ground-truth depth data for a scene from Unreal Engine.

I explored the buffer visualization modes, namely SceneDepth and SceneDepthWorldUnits, and used the High Resolution Screenshot feature to export these buffers in EXR format.
I found that SceneDepth is normalized to the 0-1 range, which isn't helpful for my use case, and it also had the fractals issue.
After converting the SceneDepthWorldUnits EXR file to other formats such as PFM and PNG, I could verify that this was indeed depth in absolute world units.
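For anyone attempting the same conversion: the PFM side of it is simple enough to do by hand with numpy. Below is a minimal, untested-in-UE sketch of a grayscale PFM writer/reader, assuming you have already loaded the EXR into a single-channel float32 array (e.g. via OpenEXR or imageio); the PFM format stores rows bottom-to-top and signals endianness with the sign of the scale line.

```python
import io
import numpy as np

def write_pfm(f, depth):
    """Write a single-channel float32 depth map to a grayscale PFM ('Pf') stream."""
    depth = np.asarray(depth, dtype=np.float32)
    h, w = depth.shape
    f.write(b"Pf\n")                       # 'Pf' = grayscale, 'PF' would be RGB
    f.write(f"{w} {h}\n".encode("ascii"))
    f.write(b"-1.0\n")                     # negative scale => little-endian data
    f.write(np.flipud(depth).tobytes())    # PFM stores rows bottom-to-top

def read_pfm(f):
    """Read back a grayscale PFM stream into a float32 (h, w) array."""
    assert f.readline().strip() == b"Pf"
    w, h = map(int, f.readline().split())
    scale = float(f.readline())
    data = np.frombuffer(f.read(), dtype="<f4" if scale < 0 else ">f4")
    return np.flipud(data.reshape(h, w))
```

For PNG you would have to quantize the float depth (e.g. to 16-bit after dividing by the scene's max depth), so PFM or EXR is the better choice when you need the absolute values intact.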

But there were a few issues here. Certain meshes in the scene were missing from the depth map. After some experimentation and checking the documentation, I realized that this is because of the translucent nature of these meshes. Since SceneDepth is calculated exclusively for non-translucent meshes, due to the order of Unreal's internal rendering passes, those meshes were missing from the depth maps.

Hoping to work around this internal rendering order (Scene Depth is calculated before translucency is rendered), I tried computing depth as a post-process effect using the SceneDepth node in a Post Process Material. The issue persists even in this case.

It would be really helpful if any of you could suggest a workaround to get a depth map of the entire scene (including both translucent and non-translucent meshes) and export it in some format.
I am a beginner in Unreal Engine, so it is possible that I have missed something. Any suggestion would be greatly appreciated.

Thank you in advance.

One way to get completely accurate depth would be to create a shader that encodes the depth into an RGBA value (so it can be 32 bits) and force every mesh in the scene to use that shader - that's roughly what the visualization modes do anyway.
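To make the "depth into RGBA" idea concrete: the trick is to reinterpret the 32 bits of each float depth value as four 8-bit channels, which survives export through any lossless RGBA8 image format. Here is the packing math demonstrated on the CPU with numpy (a sketch of the principle only; in a material you would do the equivalent bit manipulation, e.g. in a Custom HLSL node):

```python
import numpy as np

def depth_to_rgba8(depth):
    """Losslessly reinterpret a float32 (h, w) depth map as (h, w, 4) uint8.

    Each float's 4 bytes are spread across the R, G, B, A channels, so an
    8-bit-per-channel RGBA image can carry full 32-bit depth."""
    d = np.ascontiguousarray(depth, dtype=np.float32)
    return d.view(np.uint8).reshape(*d.shape, 4)

def rgba8_to_depth(rgba):
    """Invert depth_to_rgba8: reassemble the 4 bytes into float32 depth."""
    return np.ascontiguousarray(rgba, dtype=np.uint8).view(np.float32)[..., 0]
```

The round trip is bit-exact, unlike quantizing depth into a single 8- or 16-bit channel.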

You could also force all meshes' materials to be opaque, use SceneDepth, and then switch them back afterwards.

Both of these methods would require enumerating all meshes and their materials though, so they would take a bit of learning.
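That enumeration doesn't have to be manual, though. Below is an untested sketch of the second approach using the editor's Python Editor Scripting plugin (the `unreal` module only exists inside the editor, so this can't run standalone); the API names are my best understanding of the Unreal Python API, and changing a Material asset's blend mode will trigger a shader recompile, so expect a pause on large scenes:

```python
# Sketch only: requires the Unreal Editor with the Python Editor Scripting
# plugin enabled. Flips every translucent base material in the current level
# to the given blend mode and returns the list of changed materials so the
# same function can restore them afterwards.
import unreal

def set_translucent_materials(blend_mode=unreal.BlendMode.BLEND_OPAQUE):
    changed = []
    for actor in unreal.EditorLevelLibrary.get_all_level_actors():
        for comp in actor.get_components_by_class(unreal.StaticMeshComponent):
            for i in range(comp.get_num_materials()):
                mat = comp.get_material(i)
                base = mat.get_base_material() if mat else None
                if base and base.get_editor_property("blend_mode") == \
                        unreal.BlendMode.BLEND_TRANSLUCENT:
                    base.set_editor_property("blend_mode", blend_mode)
                    changed.append(base)
    return changed

# Usage idea: flip to opaque, take the High Resolution Screenshot with the
# SceneDepthWorldUnits visualization, then flip the returned list back to
# unreal.BlendMode.BLEND_TRANSLUCENT.
```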

Yes, the second suggestion was something I had considered.
Since translucency does not affect the depth values themselves, the meshes need not retain that property during depth capture.

But as you mentioned, this would be a tedious process: we would have to go through all the translucent meshes, make them opaque, and then switch them back to restore the accurate RGB scene. That isn't feasible or scalable in my case, as I am working with very large scenes containing a large number of meshes.

I wanted to know if there is any alternative to this cumbersome process of changing the properties of meshes individually.