Realtime Dynamic GI + Reflections + AO + Emissive - AHR

In this case the g-buffer is being reconstructed based on the most important, high-frequency changes in the g-buffer/viewspace. That is not the same thing as the frequency of changes in lightspace, which isn't known until it's traced. The viewspace varies on the edges you detected, true, but that's no indication that lightspace will (or rather, it's a poor indicator; back to that in a second). Still, assumptions can be made, specifically that lighting samples applied to neighboring worldspace pixels will be similar, which is why it's not too much of a stretch to downsample and then screenspace raytrace.
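To make that assumption concrete, here's a minimal sketch (not AHR's actual upsample, just an illustration) of how a low-res traced lighting buffer could be pushed back to full resolution with a joint bilateral weight, so lighting only gets shared between pixels whose g-buffer attributes say they're on similar surfaces. The sigma/power constants and function names are my own assumptions.

```cpp
#include <cmath>
#include <algorithm>

struct Vec3 { float x, y, z; };

static float dot(const Vec3& a, const Vec3& b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Weight for one coarse sample: high when depth and normal match the
// full-res pixel, so lighting only bleeds across "similar" worldspace surfaces.
float bilateralWeight(float hiDepth, const Vec3& hiNormal,
                      float loDepth, const Vec3& loNormal)
{
    const float depthSigma  = 0.1f;   // assumed tolerance in view-space units
    const float normalPower = 8.0f;   // assumed sharpness of the normal test

    float dz = hiDepth - loDepth;
    float wDepth  = std::exp(-(dz * dz) / (2.0f * depthSigma * depthSigma));
    float wNormal = std::pow(std::max(dot(hiNormal, loNormal), 0.0f), normalPower);
    return wDepth * wNormal;
}

// Upsample one full-res pixel from its four nearest low-res lighting samples.
Vec3 upsampleLighting(const Vec3 loLight[4], const float loDepth[4], const Vec3 loNormal[4],
                      float hiDepth, const Vec3& hiNormal)
{
    Vec3 sum{0, 0, 0};
    float wSum = 0.0f;
    for (int i = 0; i < 4; ++i) {
        float w = bilateralWeight(hiDepth, hiNormal, loDepth[i], loNormal[i]);
        sum.x += loLight[i].x * w;
        sum.y += loLight[i].y * w;
        sum.z += loLight[i].z * w;
        wSum  += w;
    }
    if (wSum < 1e-5f) return loLight[0];   // no similar neighbour: just take one sample
    return { sum.x / wSum, sum.y / wSum, sum.z / wSum };
}
```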

The same should apply to any raytracing. Specifically, one could take a voxel-like structure of the screen, cascaded so the voxel size goes up as depth increases (thus reducing the sampling rate as scene complexity increases with depth). The voxels would otherwise be of equal size in all dimensions (depth as well as x and y), essentially downsampling the screen in three dimensions instead of two. Then, from the center of each voxel, trace and apply the results to the entire g-buffer portion that the voxel contains. Blurring or otherwise combining contributions from neighboring voxels would be needed to ensure a smooth change in lighting. Another interesting result would be valid temporal gathering over time: you could essentially gather light in worldspace, and since you know where each worldspace voxel is, just keep results from previous frames and keep reusing/adding to them.
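A rough sketch of that cascaded "screen voxel" idea, under my own assumptions about base size and cascade spacing (none of this is from AHR): each position maps to a voxel whose edge length doubles with each depth cascade, and a persistent per-voxel accumulator lets traced lighting from previous frames be reused.

```cpp
#include <cmath>
#include <cstddef>
#include <unordered_map>

struct VoxelKey {
    int x, y, z, cascade;
    bool operator==(const VoxelKey& o) const {
        return x == o.x && y == o.y && z == o.z && cascade == o.cascade;
    }
};
struct VoxelKeyHash {
    size_t operator()(const VoxelKey& k) const {
        // simple hash combine; good enough for a sketch
        return ((size_t)k.x * 73856093u) ^ ((size_t)k.y * 19349663u)
             ^ ((size_t)k.z * 83492791u) ^ ((size_t)k.cascade * 2654435761u);
    }
};

struct VoxelSample { float r = 0, g = 0, b = 0; int count = 0; };

// Map a world-space position to its cascaded voxel. baseSize and cascadeNear
// are assumed tuning parameters: voxels of edge baseSize out to cascadeNear,
// doubling in size each time the view distance doubles after that.
VoxelKey worldToVoxel(float wx, float wy, float wz, float viewDepth,
                      float baseSize = 0.25f, float cascadeNear = 4.0f)
{
    int cascade = std::max(0, (int)std::floor(
        std::log2(std::max(viewDepth, cascadeNear) / cascadeNear)));
    float size = baseSize * (float)(1 << cascade);
    return { (int)std::floor(wx / size),
             (int)std::floor(wy / size),
             (int)std::floor(wz / size),
             cascade };
}

// Persistent worldspace accumulator: results traced this frame are averaged
// into whatever the same voxel gathered on previous frames. (Note: keys near
// cascade boundaries change as the camera moves, which would drop history
// there; a real implementation would have to handle that transition.)
using VoxelCache = std::unordered_map<VoxelKey, VoxelSample, VoxelKeyHash>;

void accumulate(VoxelCache& cache, const VoxelKey& key, float r, float g, float b)
{
    VoxelSample& s = cache[key];
    s.r += r; s.g += g; s.b += b; s.count += 1;
}
```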

The downside to this is possibly dramatic lighting changes for geometry that comes in without samples from previous frames. The effects temporal upsampling is currently applied to are relatively minor, such as shadow filtering and screenspace raytracing, so the artifact can be hidden by the fact that the new geometry would be a bit motion blurred for a frame anyway. But large-contribution diffuse lighting might make it unusable, or at least unusable for progressive sampling over a lot of frames. Still, the idea of worldspace downsampling seems valid.
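For completeness, a minimal sketch of how that "no history" case is usually detected before reusing accumulated lighting: reproject the current world position with last frame's view-projection matrix and compare against the stored depth; on mismatch (newly revealed geometry), discard the history and start accumulating fresh. The matrix/buffer types, NDC depth convention, and tolerance here are assumptions for illustration, not anything from AHR.

```cpp
#include <cmath>

struct Float4 { float x, y, z, w; };
struct Mat4 { float m[4][4]; };

static Float4 mul(const Mat4& m, const Float4& v) {
    Float4 r{};
    r.x = m.m[0][0]*v.x + m.m[0][1]*v.y + m.m[0][2]*v.z + m.m[0][3]*v.w;
    r.y = m.m[1][0]*v.x + m.m[1][1]*v.y + m.m[1][2]*v.z + m.m[1][3]*v.w;
    r.z = m.m[2][0]*v.x + m.m[2][1]*v.y + m.m[2][2]*v.z + m.m[2][3]*v.w;
    r.w = m.m[3][0]*v.x + m.m[3][1]*v.y + m.m[3][2]*v.z + m.m[3][3]*v.w;
    return r;
}

// Returns true if the history sample for this world position is usable.
// prevDepthAt is whatever lookup the engine provides into last frame's depth.
bool historyValid(const Mat4& prevViewProj, const Float4& worldPos,
                  float (*prevDepthAt)(float u, float v),
                  float depthTolerance = 0.01f)
{
    Float4 clip = mul(prevViewProj, worldPos);
    if (clip.w <= 0.0f) return false;                  // behind previous camera
    float u = 0.5f + 0.5f * clip.x / clip.w;
    float v = 0.5f + 0.5f * clip.y / clip.w;
    if (u < 0.0f || u > 1.0f || v < 0.0f || v > 1.0f) return false; // off-screen last frame
    float reprojDepth = clip.z / clip.w;
    return std::fabs(prevDepthAt(u, v) - reprojDepth) < depthTolerance; // disocclusion check
}
```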

Regardless, what's presented is essentially a 2D image compression algorithm, and possibly an impressive one; what's the compression ratio of your input compared to the original? Anyway, in trying to sample only the pixels you've chosen, you'd get dramatic and temporally incoherent lighting pops as light sampling skips large sections of the screen. E.g. a large, high-frequency luminance change might be totally valid in the middle portions of her dress, but since you're not sampling from those positions you'd not see it at all until it hits a valid pixel, and then POP! a dramatic lighting change happens. Put another way, indirect lighting can be as high frequency as direct lighting (just not as often); it wouldn't be valid to sample shadow maps only from your "important" pixels either, as large sections would pop in and out of shadow coherently.