Access depth buffer (and scene color) from ISceneViewExtension::PostRenderBasePassMobile_RenderThread

I am attempting to add a custom full-screen post-processing effect: a simple outline around opaque objects based on sampling the scene depth. Nothing crazy, except that it must be rendered before translucent objects and fog, so that the outlines are fogged along with all the opaque objects and don’t render over particles and other translucent things. This means the easy way of using a PostProcessMaterial won’t work, as those only run after all opaque and translucent rendering has finished.

I’m also trying to do this without modifying any engine code (though this is not a hard requirement), and the ‘sanctioned’ way seems to be to use a SceneViewExtension. It’s a simple edge detection based on depth, for which I need access to the depth buffer, and eventually read/write access to the scene color render target.

The catch here is that I need this to work on mobile, using the mobile forward renderer. I have something that is currently working perfectly fine in the editor when previewing in Android Vulkan mode. However, on device I hit an ensure that seems related to some unexpected issue with the scene textures’ usage in the shader, and although the shader still runs, any sampling of the depth buffer seems to just give me back 0s. (That said, returning a flat color, for example, produces the expected result, so I know at least that writing back out to the render target is working fine.)

I’m almost certain the issue is either:

  1. The depth buffer is not actually available at this point in the render pipeline.
  2. My cpp code that sets up the shader parameters isn’t correct.

Hopefully it’s the latter - there’s very little documentation for SceneViewExtensions, and pretty much nothing if you’re trying to use any of the callbacks besides PrePostProcessPass_RenderThread. The callback I’m using doesn’t give me an FRDGBuilder or a direct reference to the SceneTextures, so it’s been a good bit of trial and error just to get anything working at all with just the FRHICmdList and FSceneView.
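
For reference, the extension I’m describing has roughly this shape (a minimal sketch with illustrative names, not my exact code):

```
// Minimal sketch of the setup described above (illustrative names, not my exact code).
// The extension is created once on the game thread; the mobile base-pass callback only
// receives an FRHICommandList and an FSceneView.
#include "SceneViewExtension.h"

class FOutlineViewExtension : public FSceneViewExtensionBase
{
public:
	FOutlineViewExtension(const FAutoRegister& AutoRegister)
		: FSceneViewExtensionBase(AutoRegister)
	{}

	virtual void SetupViewFamily(FSceneViewFamily& InViewFamily) override {}
	virtual void SetupView(FSceneViewFamily& InViewFamily, FSceneView& InView) override {}
	virtual void BeginRenderViewFamily(FSceneViewFamily& InViewFamily) override {}

	// Called from FMobileSceneRenderer::PostRenderBasePass (see the engine excerpt below).
	virtual void PostRenderBasePassMobile_RenderThread(FRHICommandList& RHICmdList, FSceneView& InView) override
	{
		// ... draw the full-screen outline pass here ...
	}
};

// Registered once from game code, e.g. during GameInstance / subsystem initialization:
// TSharedPtr<FOutlineViewExtension, ESPMode::ThreadSafe> OutlineExtension =
//     FSceneViewExtensions::NewExtension<FOutlineViewExtension>();
```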

One thing I’ve noticed is that the `RenderForwardMultiPass` path is taken by the mobile forward renderer when simulating in the editor. On my test Android device, which is running Vulkan, the renderer takes the `RenderForwardSinglePass` path instead, shown below.

If I remove the SceneDepth parameter and just return a solid color, or a color defined by simple scalar / vector parameters I pass in, then everything works just fine. I’ve also noticed that if I send SceneColor, it comes up black, but I assume this is because I can’t read/write it in the same render pass and need to do a copy into a new render target first (though again, for some reason it works just fine in the editor when simulating).
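
The shader-parameter setup I mean is along these lines (a simplified sketch with illustrative names, not the exact code):

```
// Simplified sketch of the global shader parameters being bound (illustrative names).
// With only the scalar/vector parameters bound, the pass works on-device; binding the
// depth texture is what coincides with the ensure shown further down.
#include "GlobalShader.h"
#include "ShaderParameterStruct.h"

class FOutlinePS : public FGlobalShader
{
public:
	DECLARE_GLOBAL_SHADER(FOutlinePS);
	SHADER_USE_PARAMETER_STRUCT(FOutlinePS, FGlobalShader);

	BEGIN_SHADER_PARAMETER_STRUCT(FParameters, )
		SHADER_PARAMETER_TEXTURE(Texture2D, SceneDepthTexture)   // the problematic binding
		SHADER_PARAMETER_SAMPLER(SamplerState, SceneDepthSampler)
		SHADER_PARAMETER(FVector4f, OutlineColor)                 // simple params work fine
		SHADER_PARAMETER(FVector2f, InvBufferSize)
	END_SHADER_PARAMETER_STRUCT()
};

IMPLEMENT_GLOBAL_SHADER(FOutlinePS, "/Plugin/OutlineTest/Outline.usf", "MainPS", SF_Pixel);
```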

Attached are some stripped-down versions of the relevant scene view extension and shader code. Hopefully I’m missing something very simple here.

```
// FMobileSceneRenderer::RenderForwardSinglePass, in MobileShadingRenderer.cpp
// …
// Depth pre-pass
RHICmdList.SetCurrentStat(GET_STATID(STAT_CLM_MobilePrePass));
RenderMaskedPrePass(RHICmdList, View);
// Opaque and masked
RHICmdList.SetCurrentStat(GET_STATID(STAT_CLMM_Opaque));
RenderMobileBasePass(RHICmdList, View, &PassParameters->InstanceCullingDrawParams);
RenderMobileDebugView(RHICmdList, View);
RHICmdList.PollOcclusionQueries();
PostRenderBasePass(RHICmdList, View); // <— My scene view extension gets called in here
// … render decals, translucent, fog
```

Ensure I’m hitting when running on-device:

```
04-14 15:23:51.278 22679 22912 D UE : [2025.04.14-20.23.51:278][ 0]LogOutputDevice: Error: Ensure condition failed: Layout == VK_IMAGE_LAYOUT_READ_ONLY_OPTIMAL || Layout == VK_IMAGE_LAYOUT_SHARED_PRESENT_KHR || Layout == VK_IMAGE_LAYOUT_DEPTH_READ_ONLY_STENCIL_ATTACHMENT_OPTIMAL || Layout == VK_IMAGE_LAYOUT_DEPTH_ATTACHMENT_STENCIL_READ_ONLY_OPTIMAL || Layout == VK_IMAGE_LAYOUT_DEPTH_STENCIL_READ_ONLY_OPTIMAL || Layout == VK_IMAGE_LAYOUT_SHADER_READ_ONLY_OPTIMAL || Layout == VK_IMAGE_LAYOUT_GENERAL [File:Runtime/VulkanRHI/Private/VulkanDescriptorSets.h] [Line: 1084]
```

Hi Giovanni,

You are correct in your assessment that this shader is trying to sample the depth buffer as a texture while it is the depth attachment of the active framebuffer. You should, however, be able to fetch the pixel’s depth using LookupDeviceZ(float2 screenUV), which performs a buffer fetch from either the depth buffer (if supported by the hardware) or from the DepthAux, a color target that mirrors the contents of the depth buffer in the forward renderer.

Best regards.

Hi Giovanni,

A depth prepass would address the issue, since the depth texture would be resolved prior to sampling. However, it does add overhead. We continue to investigate a solution that wouldn’t require this. Are you able to share a full repro project under the original conditions? I’d want to ensure that we are integrating in the same manner.

Best regards.

Hi Giovanni,

Thanks for the example. The outline shader here differs from your previous MinimalShader in that it requires a stored depth texture, since it takes additional samples at offsets; depth buffer fetching only works for the current destination texel. The EarlyZ pass would indeed generate a stored texture prior to the color pass (on tile-based GPUs, such as those in iOS devices, render targets cannot be sampled until the generating pass is complete and its contents stored), making this approach functional; however, the additional cost of EarlyZ may not be worth it. To avoid the need for EarlyZ, outlinetest would preferably set up a post-process pass and render its screen-space effect there. That pass occurs after the base pass, ensuring the depth texture is stored and current to the running frame, rather than containing the stored results of the previous frame as it would if sampled within the base pass. See https://dev.epicgames.com/community/learning/knowledge-base/0ql6/unreal-engine-using-sceneviewextension-to-extend-the-rendering-system for an example of a post-processing pass set up via a SceneViewExtension. If you need further assistance, don’t hesitate to reach out again.
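
(For reference, the post-process route described in that article boils down to roughly the following, expressed as an additional override on the same kind of FSceneViewExtensionBase subclass sketched earlier in this thread; illustrative names, assuming the UE 5.x callback signature.)

```
// Rough sketch of the post-process route from the linked article (illustrative names).
// Requires including "PostProcess/PostProcessing.h" from the Renderer module for
// FPostProcessingInputs.
virtual void PrePostProcessPass_RenderThread(FRDGBuilder& GraphBuilder, const FSceneView& View,
                                             const FPostProcessingInputs& Inputs) override
{
	Inputs.Validate();

	// The RDG scene textures are available here, so the stored, current-frame depth can be
	// sampled at offset UVs (unlike the per-pixel depth fetch available in the base pass).
	const FSceneTextureUniformParameters* SceneTextures = Inputs.SceneTextures->GetParameters();
	FRDGTextureRef SceneDepth = SceneTextures->SceneDepthTexture;
	FRDGTextureRef SceneColor = SceneTextures->SceneColorTexture;

	// ... add an RDG pass that reads SceneDepth and writes the outline over SceneColor ...
}
```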

Best regards.

Hi Giovanni,

Yes indeed; as a post effect occurring after translucents and fog, it would have severe artifacts. Under the current architecture, it appears your depth prepass approach is the only way to have access to the current scene’s depth as a texture. We are looking at alternatives to facilitate post processing between opaques and translucents. I’ll keep you apprised if this becomes available, in case it would be a more performant option compared to the current depth prepass approach.

Best regards.

Hi Stephane, thanks for the response.

I did manage to get _something_ using the DepthAux. My current approach is to check whether it’s valid and use it if so, otherwise fall back to just Depth. (I can’t use LookupDeviceZ, as the SceneTexturesStruct doesn’t appear to be available in my shader, and I’m not sure how to get it in there.) I haven’t yet solved not having a scene color to sample from, but for now I’m just rendering black or white based on detected edges.

It is sort of working now, but I’m getting some strange results -

  • The format of `DepthAux` is always PF_R16F, no matter what the format of `Depth` is. I’ve tried modifying the Android project settings to force a 24-bit or 32-bit depth buffer, to no effect. That said, although PF_R16F is not considered by the `IsDepthOrStencilFormat` method to be a valid depth format, if I just go ahead and use it anyway, then I can still sample it, and it seems to be the data I want. Except…
  • It appears to be the depth buffer contents from the previous frame. If I’m moving the view quickly enough, I can see the outlines lag behind where they should be, seemingly by a frame. If this is expected and `DepthAux` is intended to be the previous frame’s depth, then this will be a non-starter unfortunately. Naturally we don’t want to be running at a low framerate, but it’s inevitable some users will be running on lower-end phones, and seeing the outlines trail behind everything else will be no good.
  • On top of this, there seem to be some artifacts in the form of thin horizontal and vertical lines flashing on and off as I move the view around. If I just sample and output the depth itself, I don’t see these, but I’m not sure why my edge detection would cause those lines to appear (is it a precision thing?). It looks like a fixed grid of 5 rows and 2 columns; different parts of it flash on and off as I move the view around.
    • It doesn’t occur in the editor when I use Vulkan Preview (though there, I am able to use Depth and not DepthAux, which isn’t available).
    • This picture here is roughly what it looks like - the thick lines are the outlines of geometry I expect to see, then this grid of thin white lines appears over the top as I’m moving the view. It never appears if I’m not moving. [Image Removed]

EDIT:

In addition to this, I went ahead and tried to force a full depth prepass, by adding this to AndroidEngine.ini:

```
[/Script/Engine.RendererSettings]
r.Mobile.EarlyZPass=1
```

and this seems to result in having the full, current frame’s depth buffer available at the end of opaque rendering. It’s not ideal, since we’re not using any of the other features you’d need a depth prepass for, but I’m getting somewhere at least.

The EarlyZ pass method is working, but we’re concerned about its performance: in theory we’re now drawing everything at least twice, even if the first draw is depth-only. Is this expected to have a large overhead?

My scene view extension adds the full-screen pass after opaque rendering; at that point the scene depth should be filled out. Is it possible to force that buffer to become available inside the scene view extension callback? Does an extra pass need to be added to make it available as a separate texture? If so, how do we do that given just the RHICommandList and View that the SceneViewExtension gets?
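
To illustrate the kind of ‘extra pass’ I have in mind, here is a purely hypothetical sketch of copying the depth target into a sampleable texture using only the RHI command list. This is not something I have working: how to obtain the depth target’s RHI texture inside this callback is exactly the open question, and I suspect such a copy couldn’t be issued from inside the single-pass path’s active render pass anyway.

```
// Hypothetical sketch only: shows the copy mechanics, not a working solution.
// "SceneDepthRHI" is assumed to come from somewhere; nothing here is validated on the
// single-pass mobile path, where we are still inside the base pass's render pass.
#include "RHICommandList.h"
#include "RHIResources.h"

static FTextureRHIRef CopyDepthForSampling(FRHICommandList& RHICmdList, FRHITexture* SceneDepthRHI)
{
	const FIntVector Size = SceneDepthRHI->GetSizeXYZ();
	const FRHITextureCreateDesc Desc =
		FRHITextureCreateDesc::Create2D(TEXT("OutlineDepthCopy"), Size.X, Size.Y, SceneDepthRHI->GetFormat())
			.SetFlags(ETextureCreateFlags::ShaderResource);
	FTextureRHIRef DepthCopy = RHICreateTexture(Desc);

	// Transition source and destination for the copy, then make the copy sampleable.
	RHICmdList.Transition(FRHITransitionInfo(SceneDepthRHI, ERHIAccess::DSVWrite, ERHIAccess::CopySrc));
	RHICmdList.Transition(FRHITransitionInfo(DepthCopy, ERHIAccess::Unknown, ERHIAccess::CopyDest));
	RHICmdList.CopyTexture(SceneDepthRHI, DepthCopy, FRHICopyTextureInfo());
	RHICmdList.Transition(FRHITransitionInfo(DepthCopy, ERHIAccess::CopyDest, ERHIAccess::SRVGraphics));
	RHICmdList.Transition(FRHITransitionInfo(SceneDepthRHI, ERHIAccess::CopySrc, ERHIAccess::DSVWrite));
	return DepthCopy;
}
```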

Hey Stephane,

I was able to create a small first-person example project with the same shader setup for you to look at. The project is created using the vanilla UE 5.5.4 engine from the launcher, with Android build support added. I’ve verified that it works as expected in-editor and on an Android device (my test device is a Samsung Galaxy S20 FE 5G, US version). The depth prepass is enabled by setting `r.EarlyZPass` to 1 (opaque meshes only) in an AndroidEngine.ini file that I added to the project. The default value is 3 (decide automatically).

With the prepass enabled, the shader works as desired. If that file / setting change is removed, so that the setting reverts to ‘decide automatically’, then on my device at least the depth prepass is not used, and the shader is forced to use the Aux depth buffer, which is a copy of the previous frame’s depth buffer, as I mentioned before.

Hi Stephane,

We did initially start with using a post-process pass, and that does of course give us access to the current frame’s stored depth as we would like. It’s also much easier to implement than all this custom shader stuff :sweat_smile: But, the post-process pass occurs after all base pass rendering is complete (opaque + translucent + fog), so our outlines render over the top of all translucent objects and are unaffected by fog, which can look bad in scenes with more dense fog. This is the reason I want to get the depth buffer during base pass rendering. Or, to be specific, just after all opaque rendering is completed.

In the test project, this requirement is also the reason I stuck some translucent spheres in the scene - to demonstrate that with this method, the outlines do draw underneath translucent objects and before fog, which is what we want.

The fog is less of a problem, we could always just re-apply fog to the outlines if we really want that (or fade them by distance, much easier and what we ended up doing with the post-process version). But drawing underneath translucent objects is only possible if we render the outlines before translucent rendering occurs.

The SceneViewExtension’s base pass callback happens right when we need it, in both the single-pass and multi-pass forward rendering paths, but I suppose that because the base pass is not yet ‘completed’ at that point in single-pass rendering, the depth texture hasn’t been stored yet and is therefore inaccessible. I’ll note again that this all does work in the editor when simulating mobile, because the editor is forced through the ‘multi pass’ forward code path, while on-device we always go through the single-pass path. I assume this would also work if we were doing multi-pass forward on-device, but there is presumably a performance reason we wouldn’t want to do that?