The ComputeLightGrid shader(s) seem to get bogged down during resolution changes.
I’m using a patch from John Alcatraz that allows viewport resizing without render target reallocation on SteamVR (similar to adaptive pixel density on Oculus), but I was getting big hitches with it whenever the resolution changed.
The hitches show up as “other” in the GPU portion of the performance graph, which I’ve read can come from compute shaders.
To trigger it, I slightly increase the resolution each frame by around 0.1% until it hits a ceiling, then walk it back down, over and over (sketched below).
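For reference, the test loop is basically this. It’s just a sketch: AMyTestActor and the 80–100 range are made up for illustration, and I’m using r.ScreenPercentage here as a stand-in for the patched pixel density setter.

    #include "HAL/IConsoleManager.h"

    // Members on the actor (illustrative):
    //   float Scale     = 100.0f;
    //   float Direction = -1.0f;
    void AMyTestActor::Tick(float DeltaSeconds)
    {
        Super::Tick(DeltaSeconds);

        static IConsoleVariable* ScreenPercentage =
            IConsoleManager::Get().FindConsoleVariable(TEXT("r.ScreenPercentage"));

        Scale += Direction * 0.1f;              // nudge ~0.1% per frame
        if (Scale >= 100.0f || Scale <= 80.0f)
        {
            Direction = -Direction;             // hit a bound, walk back the other way
        }
        ScreenPercentage->Set(Scale, ECVF_SetByCode);
    }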
I ran the profiler and tracked it down to STAT_ComputeLightGrid, which runs in FDeferredShadingSceneRenderer::ComputeLightGrid.
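In case anyone wants to reproduce this, the standard commands should show it:

    stat GPU       (live per-pass GPU timings; ComputeLightGrid should show up here)
    ProfileGPU     (single-frame capture with the full pass breakdown)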
I looked around at the cvars and found r.Forward.LightGridPixelSize. When I bump that up to around 512 (it defaults to 64), all the hitches stop and the “other” part of the GPU performance graph no longer has any spikes. I can freely change resolution every frame with no hitch.
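For anyone who hits the same thing, the workaround is either the console command or a persistent entry in Config/DefaultEngine.ini:

    r.Forward.LightGridPixelSize 512

    ; or in Config/DefaultEngine.ini:
    [SystemSettings]
    r.Forward.LightGridPixelSize=512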
Any thoughts on changes I can make to avoid this while still having a usable light grid? (At 512 pixels per cell there are only a few cells.) Is it just that the first initialization of the light grid is much more expensive than subsequent updates? And would it be possible to base the grid on a fixed number of divisions instead of a pixel size, so the grid dimensions don’t change with resolution (rounding might just mean cells end up slightly overlapping)? I’ve sketched what I mean below.
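To make that last question concrete, here’s roughly what I mean. This is a sketch of the idea, not actual engine code; GridSizeFromDivisions and the NumDivisions parameter are hypothetical.

    #include "Math/IntPoint.h"

    // How I understand the current behavior: grid dimensions are derived from
    // the view size, so even a 0.1% resolution change can bump the cell counts
    // (and the sizes of the culled-light buffers behind them).
    FIntPoint GridSizeFromPixels(FIntPoint ViewSize, int32 PixelSize)
    {
        // PixelSize is r.Forward.LightGridPixelSize (64 by default)
        return FIntPoint::DivideAndRoundUp(ViewSize, PixelSize);
    }

    // What I'm asking about: fix the number of divisions so the grid dimensions
    // (and allocations) never change with resolution; only the pixel footprint
    // of each cell does. Hypothetical, not an existing engine path.
    FIntPoint GridSizeFromDivisions(FIntPoint /*ViewSize*/, FIntPoint NumDivisions)
    {
        return NumDivisions;   // e.g. always 30x17 cells at any resolution
    }

With the second scheme the per-frame light culling would still run, but the buffer sizes would stay constant across resolution changes, which I suspect is what actually matters for the hitch.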
I suspect this affects Oculus adaptive pixel density as well, but I haven’t verified it there yet.