We require the ability to zoom in on distant objects using an extremely small FOV. With the standard 24-bit depth buffer (8-bit stencil) we get severe Z-fighting. However, building the engine with the define DEPTH_32_BIT_CONVERSION=1 gives us enough extra precision to greatly reduce flickering in the scene.
Note: we intend to run only on high-end Windows machines with the latest graphics hardware.
Is it safe to use this option, and what are the ramifications of doing so?
It looks like the engine will use a 64-bit depth-stencil format, DXGI_FORMAT_D32_FLOAT_S8X24_UINT, which packs 32-bit floating-point depth and 8-bit stencil (the remaining 24 bits are unused), provided your device supports it. This is a DX11 feature. In that case the memory for your depth buffer doubles, but you keep the same functionality; everything else is the same. You would lose stencil support on older cards that don't support this format.
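If you want to confirm that check yourself at startup, here is a minimal C++ sketch against the raw D3D11 API. It assumes you already have an `ID3D11Device*` in hand (named `device` here for illustration); this is not engine code, just a direct use of `ID3D11Device::CheckFormatSupport`:

```cpp
#include <d3d11.h>

// Sketch: returns true if the device can bind the 64-bit
// depth-stencil format (32-bit float depth + 8-bit stencil)
// as a depth-stencil attachment. 'device' is assumed to be
// an already-created ID3D11Device*.
bool SupportsD32FloatS8(ID3D11Device* device)
{
    UINT support = 0;
    if (FAILED(device->CheckFormatSupport(
            DXGI_FORMAT_D32_FLOAT_S8X24_UINT, &support)))
    {
        return false;
    }
    return (support & D3D11_FORMAT_SUPPORT_DEPTH_STENCIL) != 0;
}
```

On any hardware that meets D3D11 feature level requirements this should report support, which matches the original poster's target of high-end Windows machines.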
Edit: It appears the stencil buffer is not directly accessed by the user. You would have to write your own code to use stencil, so it looks like you would not lose any user-accessible functionality.
I am interested in using a 32-bit depth buffer for my project as well. Where does the define DEPTH_32_BIT_CONVERSION=1 go? I have the source downloaded and the project files generated; I just need to know where to put the define.
We added the define in D3D11RHI.Build.cs, inside the D3D11RHI class constructor:
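A minimal sketch of what that addition looks like. Only the `Definitions.Add` line is the actual change; the surrounding class skeleton is the stock module setup and is abbreviated here. Note that the exact constructor signature (and whether the list is named `Definitions` or `PublicDefinitions`) varies between engine versions, so match whatever your copy of D3D11RHI.Build.cs already uses:

```csharp
// D3D11RHI.Build.cs (sketch -- existing module code abbreviated)
public class D3D11RHI : ModuleRules
{
    public D3D11RHI(TargetInfo Target)
    {
        // ... existing include paths and module dependencies ...

        // Build the RHI with the 32-bit float depth / 8-bit stencil format
        Definitions.Add("DEPTH_32_BIT_CONVERSION=1");
    }
}
```

After adding the define, rebuild the engine so the D3D11RHI module picks up the new compile definition.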