JPEG can achieve 10x - 15x compression without noticeable changes, and at most 5x - 8x at identical quality, so as image compression goes that's not that impressive.
You raise a good point about the validity of the G-buffer as an estimator of illumination variance, and that's something I want to test. We'll see if I manage to get a test app working tomorrow.
I do plan to add the pixels from the previous frame at the reconstruction stage, so you reconstruct with both the pixels traced this frame and the ones from the previous frame, to increase the sample count and improve temporal coherence.
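A minimal sketch of that reuse idea: merge this frame's sparse traced samples with history samples before reconstruction, keeping a history sample only where the stored depth still matches the current depth buffer (so disoccluded pixels are rejected). All names here (`merge_samples`, `DEPTH_EPS`) are mine for illustration, not from AHR.

```python
# Hypothetical sketch: combine freshly traced samples with samples kept
# from the previous frame, validating history against current depth.
# Names and the depth test are illustrative, not AHR's actual code.

DEPTH_EPS = 0.01  # reject history when a depth mismatch suggests disocclusion

def merge_samples(traced, history, depth_buffer):
    """traced/history: dicts mapping (x, y) -> (radiance, depth)."""
    merged = dict(traced)  # freshly traced samples always win
    for (x, y), (radiance, depth) in history.items():
        if (x, y) in merged:
            continue  # already have a fresh sample at this pixel
        # Keep the old sample only if it still matches current geometry.
        if abs(depth_buffer[(x, y)] - depth) < DEPTH_EPS:
            merged[(x, y)] = (radiance, depth)
    return merged
```

For example, a history sample whose stored depth differs from the current depth buffer by more than `DEPTH_EPS` is dropped, while matching ones are added to the reconstruction input.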
Also, one could do a quick SSGI pass (maybe even just SSAO) at, say, 1/4 or even 1/8 resolution; that should provide a much more accurate estimator for selecting the pixels.
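Just to make the idea concrete, here is a very crude stand-in for that low-res pass: instead of real SSAO, it uses the depth spread inside each 4x4 block as a proxy for "geometry is busy here, illumination likely varies", and then ranks coarse pixels by that score. Both functions are hypothetical sketches under that assumption, not an actual SSAO implementation.

```python
# Hypothetical sketch: a 1/4-resolution occlusion-style estimator built
# from the depth buffer alone, used only to rank where to spend rays.
# A real SSAO/SSGI pass would replace the depth-spread heuristic.

def quarter_res_estimate(depth, w, h):
    """depth: row-major list of w*h floats. Returns one estimate per 4x4
    block; higher = more local depth variation = trace here first."""
    qw, qh = w // 4, h // 4
    estimate = []
    for qy in range(qh):
        for qx in range(qw):
            # Gather the 4x4 depth block this coarse pixel covers.
            block = [depth[(qy * 4 + dy) * w + (qx * 4 + dx)]
                     for dy in range(4) for dx in range(4)]
            # Depth spread as a cheap proxy for nearby occlusion.
            estimate.append(max(block) - min(block))
    return estimate

def pick_trace_pixels(estimate, budget):
    """Indices of the `budget` coarse pixels with the highest estimate."""
    order = sorted(range(len(estimate)), key=lambda i: estimate[i],
                   reverse=True)
    return order[:budget]
```

On an 8x4 depth buffer with a flat left half and a sloped right half, the right-hand coarse pixel gets the higher score and is picked first.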
Another option is going multi-pass: generate a relatively small number of trace samples (based on the G-buffer), trace them, analyze the traced image, and generate new sampling points for the areas that show the highest variation. That may be the best option actually; I just thought of it.
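The multi-pass loop above can be sketched as follows, assuming a per-tile budget: trace a sparse first batch everywhere, measure the variance of each tile's samples, then spend the remaining ray budget on the noisiest tiles. `trace_tile` stands in for the real tracing kernel; everything here is illustrative, not AHR code.

```python
# Hypothetical sketch of adaptive multi-pass tracing: a uniform sparse
# first pass, then extra rays directed at the highest-variance tiles.

def variance(xs):
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

def adaptive_trace(tiles, trace_tile, first_pass_rays, extra_rays):
    """tiles: list of tile ids. Returns rays spent per tile."""
    # Pass 1: uniform sparse sampling, collecting radiance per tile.
    results = {t: [trace_tile(t) for _ in range(first_pass_rays)]
               for t in tiles}
    # Rank tiles by the variance of their first-pass samples.
    ranked = sorted(tiles, key=lambda t: variance(results[t]), reverse=True)
    budget = {t: first_pass_rays for t in tiles}
    # Pass 2: round-robin the extra rays over the noisiest half of tiles.
    for i in range(extra_rays):
        t = ranked[i % max(1, len(ranked) // 2)]
        results[t].append(trace_tile(t))
        budget[t] += 1
    return budget

def make_tracer():
    """Deterministic fake tracer: tile 'noisy' alternates 1/0, others 0.5."""
    calls = {}
    def trace_tile(t):
        calls[t] = calls.get(t, 0) + 1
        return calls[t] % 2 if t == "noisy" else 0.5
    return trace_tile
```

With this fake tracer, a flat tile keeps its initial budget while the noisy tile absorbs all the extra rays, which is the behavior the multi-pass scheme is after.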
Of course, all of this assumes low-frequency GI. That's a common assumption to make, and the one that allows using some heavy blur (for example, I use two passes of a 13x13 depth-aware blur for AHR), but sure, as you say, it may break.
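For readers who haven't seen one, a depth-aware blur looks roughly like this 1-D sketch: a tap only contributes when its depth is close to the center pixel's depth, so the blur doesn't bleed across geometric edges. The 13-tap box and the `DEPTH_THRESHOLD` value are my simplifications; AHR's actual 13x13 kernel and weights may differ.

```python
# Minimal 1-D sketch of a depth-aware (edge-stopping) blur: taps whose
# depth differs too much from the center pixel are skipped entirely.
# Box weights and threshold are illustrative choices, not AHR's.

DEPTH_THRESHOLD = 0.1

def depth_aware_blur(color, depth, radius=6):  # 13 taps = 2 * 6 + 1
    out = []
    for i in range(len(color)):
        acc, weight = 0.0, 0.0
        for j in range(max(0, i - radius), min(len(color), i + radius + 1)):
            if abs(depth[j] - depth[i]) < DEPTH_THRESHOLD:
                acc += color[j]
                weight += 1.0
        out.append(acc / weight)  # weight >= 1: the center always passes
    return out
```

Running it on two flat color regions separated by a depth discontinuity leaves the signal untouched: no light leaks across the edge, which is exactly why the heavy blur is tolerable under the low-frequency-GI assumption.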