We are using a scene capture to render the depth values of specific primitives at full output resolution, to use as input for a post-process material that runs after upscaling.
The scene capture is set to run as a custom pass in the main renderer for performance. However, this causes the SceneTexture extent for the entire view family (calculated in FSceneRenderer::GetDesiredInternalBufferSize) to become the full output resolution, which in turn causes every other rendering feature whose allocations are proportional to scene texture size (e.g. various Lumen textures) to grow as well. We have observed this increasing memory use by about a gigabyte with 4K display output and a screen percentage of around 70%.
All this despite the fact that we only need a single depth buffer at the output resolution.
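For a rough sense of scale: at 3840x2160 output with r.ScreenPercentage 70, the internal resolution is 2688x1512 (about 4.1 million pixels) versus about 8.3 million pixels at full output resolution, so every allocation sized from the scene texture extent roughly doubles in area.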
Is this a known/expected outcome? If so, I think there should probably be a warning about it in the documentation for bIgnoreScreenPercentage or bRenderInMainRenderer.
I have been able to work around this with some engine modifications (a sketch of the first change is below):
1. Ignore custom passes in GetDesiredInternalBufferSize
2. Swap out the SceneDepth texture for a larger one when necessary in the custom render pass rendering
3. Use the SceneDepth size rather than GetSceneTexturesConfig().Extent in CopySceneCaptureComponentToTarget
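Here is that sketch of the first change. It is a standalone illustration of the idea rather than the actual engine code: skip custom-render-pass views when computing the view family's desired extent, so a full-resolution capture no longer drives the shared scene texture size. The types and the bIsCustomRenderPass marker are placeholders, not real engine types.

```cpp
// Standalone illustrative sketch (not engine code) of change 1: exclude
// custom-render-pass views from the extent used to size the shared scene textures.
#include <algorithm>
#include <vector>

struct FExtentSketch { int X = 0; int Y = 0; };

struct FViewSketch
{
    FExtentSketch Size;                // view size after screen percentage is applied
    bool bIsCustomRenderPass = false;  // hypothetical marker for custom-pass views
};

// Max extent over the "normal" views only; custom render pass views are skipped
// and are expected to provide their own (possibly larger) depth target instead.
FExtentSketch GetDesiredInternalBufferSizeIgnoringCustomPasses(const std::vector<FViewSketch>& Views)
{
    FExtentSketch Extent;
    for (const FViewSketch& View : Views)
    {
        if (View.bIsCustomRenderPass)
        {
            continue; // don't let a full-resolution capture inflate the shared extent
        }
        Extent.X = std::max(Extent.X, View.Size.X);
        Extent.Y = std::max(Extent.Y, View.Size.Y);
    }
    return Extent;
}
```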
I am also looking into the possibility of avoiding engine changes by doing a separate depth pass (including Nanite) in a SceneViewExtension, if it can be done without too much extra work.
Steps to Reproduce
1. Create a Scene Capture Component 2D (Capture Source can be SceneDepth or probably anything else) with “Render in Main Renderer”, “Main View Resolution”, “Main View Camera” and “Ignore Screen Percentage” enabled, and Main View Resolution Divisor set to (1,1).
2. Set the screen percentage to below 100% (e.g. r.ScreenPercentage 50).
3. Observe texture/render target memory usage (e.g. stat rendertargetpool, rhi.dumpresourcememory Lumen) when the component is set to capture every frame vs. not capturing.
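For reference, the component setup in C++ looks roughly like the sketch below. The property names other than bRenderInMainRenderer and bIgnoreScreenPercentage are inferred from the editor labels and may not match exactly.

```cpp
// Rough C++ equivalent of the repro setup above. Flags marked "assumed" are
// guesses based on the editor display names; verify them against
// SceneCaptureComponent2D.h in your engine version.
#include "Components/SceneCaptureComponent2D.h"
#include "Engine/TextureRenderTarget2D.h"

void ConfigureDepthCapture(USceneCaptureComponent2D* Capture, UTextureRenderTarget2D* Target)
{
    Capture->TextureTarget = Target;
    Capture->CaptureSource = ESceneCaptureSource::SCS_SceneDepth; // or probably any other source

    Capture->bRenderInMainRenderer   = true;  // "Render in Main Renderer"
    Capture->bIgnoreScreenPercentage = true;  // "Ignore Screen Percentage"

    Capture->bMainViewResolution       = true;             // assumed name: "Main View Resolution"
    Capture->bMainViewCamera           = true;             // assumed name: "Main View Camera"
    Capture->MainViewResolutionDivisor = FIntPoint(1, 1);  // assumed name: "Main View Resolution Divisor"

    Capture->bCaptureEveryFrame = true; // toggle to compare memory with and without the capture
}
```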
We’ve added support for the “Exclude from Scene Texture Extents” Scene Capture component flag on Custom Render Passes (which is what “Render in Main Renderer” generates) for 5.8, which will allow you to opt out of the capture’s resolution affecting the main view family’s resolution.

In general, this will be a fairly harmless flag to enable on every capture. Even in cases where the capture resolution matches the main view (with no screen percentage) or is smaller, there is no memory hit for enabling it. In particular, if the main view family is bigger than or the same size as the capture, the capture will use the main view family’s scene texture resolution anyway, and RDG can reuse the same resources between the Custom Render Pass and the main view. Only when the capture is larger will it potentially generate extra transient resources at the larger dimensions, and for depth-only rendering (as in your case) that will just be the depth buffer.

This does require a flag to be set, leaving the risk of unexpected memory increases. I thought about adding CVars that would force-set the flag for specific cases, but that complicates documenting the behavior of the component flag, so we decided against it for now.
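To illustrate the sizing behavior described above, here is an illustrative-only sketch; it is not the actual engine code.

```cpp
// Illustrative-only sketch of how the flag affects sizing; not actual engine code.
struct FExtent { int X = 0; int Y = 0; };

// With "Exclude from Scene Texture Extents" enabled, the custom render pass no
// longer grows the family's scene texture extent. It renders into the family's
// scene textures whenever it fits, and only allocates its own larger transient
// depth target (for a depth-only capture) when it does not.
FExtent ChooseCustomPassDepthExtent(const FExtent& FamilyExtent, const FExtent& CaptureExtent)
{
    const bool bFitsInFamilyTextures =
        CaptureExtent.X <= FamilyExtent.X && CaptureExtent.Y <= FamilyExtent.Y;

    return bFitsInFamilyTextures
        ? FamilyExtent    // reuse the main view's scene textures, no extra memory
        : CaptureExtent;  // only the depth buffer is allocated at the larger size
}
```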
In the meantime, your workaround changes to the engine sound reasonable.