Translucent Materials behind Other Translucent Materials Ignore Refraction

I’m extremely confident it’s parallax-corrected cubemaps, both because the technical artist behind the famous bottle shader used static reflection captures for the bottle refraction (if they had a dynamic capture, they would’ve used it instead), and because you can see the probe transition lerp when moving between areas. Also, the TV can be seen reflecting itself in its original position - something that would not happen with a real-time capture.

But it’s definitely a more advanced parallax-corrected cubemap. Previous versions of Source only had axis-aligned bounding box correction, whereas this one somehow has a better representation of the scene. My guess is that either it captures depth only in real time (possibly against a cheap proxy environment rather than the real scene - think of the custom depth buffer in UE), or they have a distance field or some other crude assembly of primitives that can be tested for intersections easily and cheaply.
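For reference, the classic AABB correction works roughly like this - a minimal C++ sketch with a made-up Vec3 type, not anyone’s actual engine code. The fancier variants I’m guessing at above would just swap the box intersection for a proxy-mesh or distance-field hit:

```cpp
// Parallax-corrected cubemap lookup against an axis-aligned box.
#include <algorithm>
#include <cmath>

struct Vec3 {
    float x, y, z;
    Vec3 operator-(Vec3 o) const { return {x - o.x, y - o.y, z - o.z}; }
    Vec3 operator+(Vec3 o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
};

// Direction to feed into the cubemap fetch. worldPos is the shaded
// point, dir the reflection/refraction direction, boxMin/boxMax the
// probe's bounding box, probePos where the cubemap was captured.
Vec3 parallaxCorrectedDir(Vec3 worldPos, Vec3 dir,
                          Vec3 boxMin, Vec3 boxMax, Vec3 probePos) {
    // Ray vs. axis-aligned box: exit distance along each slab.
    auto slab = [](float p, float d, float lo, float hi) {
        // Rays parallel to a slab never exit through it.
        if (std::fabs(d) < 1e-6f) return 1e30f;
        float t0 = (lo - p) / d, t1 = (hi - p) / d;
        return std::max(t0, t1);
    };
    float tx = slab(worldPos.x, dir.x, boxMin.x, boxMax.x);
    float ty = slab(worldPos.y, dir.y, boxMin.y, boxMax.y);
    float tz = slab(worldPos.z, dir.z, boxMin.z, boxMax.z);
    float t  = std::min({tx, ty, tz}); // first box face the ray exits

    // Where the ray hits the proxy box, re-expressed relative to the
    // capture point: that's the direction the cubemap was stored in.
    Vec3 hit = worldPos + dir * t;
    return hit - probePos;
}
```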

I think you could get opaque multibounce to work in theory.

You would need to fall back to something like the Lumen surface cache or a cubemap pretty often though, because, like you point out, if the ray only bends a little the final hit is mostly gonna be behind the opaque occluder - but if it bends too much it’ll be off screen.
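The control flow would look something like this hedged C++ sketch - sampleDepth, the thickness threshold, and the step parameters are all placeholders I’m making up, and depth is assumed to increase away from the camera. The point is just the two bail-out cases: the ray leaves the screen, or it vanishes behind an occluder:

```cpp
// Screen-space march of a refracted ray, with both failure modes
// routed to a fallback (cubemap / surface cache) by the caller.
#include <functional>

struct Vec2 { float x, y; };

struct TraceResult {
    bool screenSpaceHit;
    Vec2 hitUV; // only meaningful when screenSpaceHit is true
};

TraceResult traceRefractedRay(
    Vec2 uv, Vec2 stepUV, float rayDepth, float stepDepth, int maxSteps,
    const std::function<float(Vec2)>& sampleDepth) // scene depth at uv
{
    const float kThickness = 0.05f; // assumed occluder thickness
    for (int i = 0; i < maxSteps; ++i) {
        uv.x += stepUV.x; uv.y += stepUV.y;
        rayDepth += stepDepth;

        // Case 1: the bent ray left the frame - fall back.
        if (uv.x < 0.f || uv.x > 1.f || uv.y < 0.f || uv.y > 1.f)
            return {false, uv};

        float sceneDepth = sampleDepth(uv);
        if (rayDepth > sceneDepth) {
            // Ray passed behind the depth buffer. Within the assumed
            // thickness we call it a hit; beyond it the true hit is
            // hidden behind the occluder - case 2, fall back.
            if (rayDepth - sceneDepth < kThickness)
                return {true, uv};
            return {false, uv};
        }
    }
    return {false, uv}; // ran out of steps: fall back
}
```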

All you need is a way to tell whether the hit was against a refractive material, plus its normal vector. Since distance fields are volumetric, hit detection wouldn’t be blocked by screen-space occlusion.
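Sphere tracing the field gives you that part for free - something like this sketch, assuming a hypothetical sdf(p) callback. Nothing in the loop depends on what the camera can see:

```cpp
// Sphere tracing against a signed distance field.
#include <functional>

struct Vec3 {
    float x, y, z;
    Vec3 operator+(Vec3 o) const { return {x + o.x, y + o.y, z + o.z}; }
    Vec3 operator*(float s) const { return {x * s, y * s, z * s}; }
};

bool sphereTrace(Vec3 origin, Vec3 dir, // dir assumed normalized
                 const std::function<float(Vec3)>& sdf,
                 Vec3& hit, float maxDist = 100.f)
{
    float t = 0.f;
    for (int i = 0; i < 128 && t < maxDist; ++i) {
        Vec3 p = origin + dir * t;
        float d = sdf(p); // distance to the nearest surface, any surface
        if (d < 1e-3f) { hit = p; return true; } // converged on a hit
        t += d; // safe to step this far without crossing geometry
    }
    return false; // no hit within range
}
```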

If there is no screen-space hit, the normal can easily be approximated from the distance field’s gradient - i.e. by sampling the direction to the nearest surface - but determining whether that hit was also refractive is trickier. In some cases it could probably be inferred, but to be sure I guess you’d need a flag bit in both the G-buffer and the surface cache, so you can test in both screen space and world space. Easier said than done…
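The gradient part is cheap - central differences over the field, roughly like this (again with a hypothetical sdf(p)). The refractive bit is deliberately absent here, because the distance field alone doesn’t carry it; that missing flag is exactly the problem:

```cpp
// Approximate the surface normal at p as the normalized gradient of
// the signed distance field, via central differences.
#include <functional>
#include <cmath>

struct Vec3 { float x, y, z; };

Vec3 sdfNormal(Vec3 p, const std::function<float(Vec3)>& sdf) {
    const float e = 1e-3f; // assumed sampling offset
    Vec3 n = {
        sdf({p.x + e, p.y, p.z}) - sdf({p.x - e, p.y, p.z}),
        sdf({p.x, p.y + e, p.z}) - sdf({p.x, p.y - e, p.z}),
        sdf({p.x, p.y, p.z + e}) - sdf({p.x, p.y, p.z - e}),
    };
    float len = std::sqrt(n.x * n.x + n.y * n.y + n.z * n.z);
    return {n.x / len, n.y / len, n.z / len};
}
```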