I was looking at the code responsible for drawing light functions (LightFunctionRendering.cpp):

```cpp
if (((FVector)View.ViewMatrices.GetViewOrigin() - LightBounds.Center).SizeSquared() < FMath::Square(LightBounds.W * 1.05f + View.NearClippingDistance * 2.0f))
{
	// Render backfaces with depth tests disabled since the camera is inside (or close to inside) the light function geometry
	GraphicsPSOInit.RasterizerState = View.bReverseCulling ? TStaticRasterizerState<FM_Solid, CM_CW>::GetRHI() : TStaticRasterizerState<FM_Solid, CM_CCW>::GetRHI();
}
else
{
	// Render frontfaces with depth tests on to get the speedup from HiZ since the camera is outside the light function geometry
	GraphicsPSOInit.DepthStencilState = TStaticDepthStencilState<false, CF_DepthNearOrEqual>::GetRHI();
	GraphicsPSOInit.RasterizerState = View.bReverseCulling ? TStaticRasterizerState<FM_Solid, CM_CCW>::GetRHI() : TStaticRasterizerState<FM_Solid, CM_CW>::GetRHI();
}
```

and had a couple of questions:
Why is depth testing disabled when drawing backfaces? Wouldn’t it make sense to use `CF_DepthFartherOrEqual` in these cases?
Why is `View.NearClippingDistance` multiplied by 2? I understand that a conservative test is probably better in this case, but I’m not sure this is the reason.
Also related: why does `StencilingGeometry::DrawCone` expect vertices to be “generated” inside the vertex shader instead of using a vertex buffer?
Thanks for reaching out. I will try to answer your questions as best as I can.
> Why is depth testing disabled when drawing backfaces? Wouldn’t it make sense to use `CF_DepthFartherOrEqual` in these cases?
If depth testing is enabled for backfaces, they might erroneously pass the depth test and be drawn on top of front-facing objects, leading to visual artifacts or incorrect rendering.
In the case of using `CF_DepthFartherOrEqual`, fragments that are farther from the camera would be rendered over fragments that are nearer, which in my understanding would overwrite the backfaces of the light function’s geometry when there are other objects that need to be drawn. I’m not entirely sure about this, and I can forward your question to Epic if desired.
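For reference, the change you are describing would presumably look something like this in the "camera inside" branch, reusing the same state objects as in your snippet. I have not verified that it is correct, it is just a sketch of the suggestion:

```cpp
// Hypothetical variant of the "camera inside" branch: keep a depth test on the
// backfaces, but reversed, so only fragments behind the already-rendered scene
// geometry pass. Unverified sketch, not the current engine code.
GraphicsPSOInit.DepthStencilState = TStaticDepthStencilState<false, CF_DepthFartherOrEqual>::GetRHI();
GraphicsPSOInit.RasterizerState = View.bReverseCulling
	? TStaticRasterizerState<FM_Solid, CM_CW>::GetRHI()
	: TStaticRasterizerState<FM_Solid, CM_CCW>::GetRHI();
```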
> Why is `View.NearClippingDistance` multiplied by 2? I understand that a conservative test is probably better in this case, but I’m not sure this is the reason.
Doubling the near clipping distance keeps the proximity check conservative. The extra padding ensures the "camera inside" path is taken even under edge cases and floating-point inaccuracies when the camera is very close to or intersecting the light geometry; if the depth test and frontface rendering were incorrectly kept enabled in that situation, the frontfaces could be clipped by the near plane and cause rendering artifacts.
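Spelled out, the test in the snippet above is roughly equivalent to the following (the variable names are mine, added only for readability):

```cpp
// The "camera inside (or nearly inside)" test from the snippet, with its terms named.
const float PaddedRadius =
	LightBounds.W * 1.05f                 // bounding-sphere radius with 5% slack
	+ View.NearClippingDistance * 2.0f;   // plus two near-plane distances of extra padding
const float CameraDistSq =
	((FVector)View.ViewMatrices.GetViewOrigin() - LightBounds.Center).SizeSquared();
const bool bCameraInsideLightGeometry = CameraDistSq < FMath::Square(PaddedRadius);
```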
> Also related: why does `StencilingGeometry::DrawCone` expect vertices to be “generated” inside the vertex shader instead of using a vertex buffer?
This is done for performance and flexibility. For lightweight, procedural geometry like a cone, where the vertex positions can be calculated mathematically, generating vertices dynamically in the vertex shader can be faster than transferring precomputed vertex data from the CPU to the GPU, and it avoids having to manage and store vertex buffers in memory. It also means that parameters like the cone’s radius, height, or tessellation level can be adjusted dynamically without recreating or updating a vertex buffer.
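To illustrate the idea (this is a standalone C++ sketch, not the engine’s actual shader code), a vertex shader can reconstruct a cone-surface position purely from the vertex index plus a few runtime parameters, which is why no CPU-side vertex buffer needs to be uploaded:

```cpp
#include <cmath>

// Minimal sketch of procedural cone vertex generation, mirroring what a vertex
// shader can do with its vertex ID. Names and conventions here are illustrative.
struct FVector3 { float X, Y, Z; };

FVector3 ConeVertexFromIndex(int VertexId, int NumSides, int NumSlices, float ConeAngle, float ConeLength)
{
	const int SideIndex  = VertexId % NumSides;   // position around the circumference
	const int SliceIndex = VertexId / NumSides;   // position along the cone axis (NumSlices >= 2 assumed)
	const float Azimuth  = 2.0f * 3.14159265f * SideIndex / NumSides;
	const float Distance = ConeLength * SliceIndex / float(NumSlices - 1);
	const float Radius   = Distance * std::tan(ConeAngle);
	// Cone apex at the origin, axis along +X (an arbitrary convention for this sketch).
	return { Distance, Radius * std::cos(Azimuth), Radius * std::sin(Azimuth) };
}
```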
Hopefully this helps. Please let me know if you have further questions.
Indeed, I believe that in this case we could enable CF_DepthFartherOrEqual. We would not benefit from HiZ, which is built for CF_DepthNearOrEqual, but we would benefit from EarlyZ if valid. I will have to measure on console.
That being said, if you want fast and efficient light functions that can be colored and work on many surface types (translucent, water, volumetrics), I would recommend checking out the Light Function Atlas. With it there is no need for stenciling volumes: all light functions are generated into the atlas in parallel (the drawback being their lower resolution), but then all lights can be rendered batched in parallel in many scenarios.
> If depth testing is enabled for backfaces, they might erroneously pass the depth test and be drawn on top of front-facing objects
I think this is what we want: since the light function should only affect pixels inside its geometry, and we are inside the light function geometry, the pixels that should be affected are those whose scene depth is closer to the camera than the backface (hence we only want the light function geometry to pass the depth test where it is farther).
Looking at the code, depth testing also seems to be disabled when rendering backfaces in the lighting pass. Enabling it could be an interesting optimization, but I would like to be sure I’m not overlooking anything. Could you please forward this to Epic?
Thanks again for the other answers, they were very helpful!