Adaptive-Blended TAA: a bit of magic for sharp and responsive scenes

The choice was driven by grass-heavy scenes and the task of reducing smearing between multiple layers of grass, while still keeping as much anti-aliasing as possible.

The aim was to stop (or at least minimize) the velocity offset jumping off the grass layer that the original pixel belongs to.
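For context, that "jumping" comes from the common closest-depth velocity dilation, where each pixel's motion vector is replaced with the one at the minimum-depth sample in its neighborhood, so an edge pixel readily inherits the velocity of a nearer grass layer. Here is a minimal CUDA-style sketch of that baseline (not this post's method); the buffer layout, names, and 3x3 pattern are my assumptions:

```cuda
// Baseline velocity dilation for comparison: take the motion vector
// from the 3x3 neighbor with MINIMUM depth, so silhouette pixels
// inherit the foreground layer's velocity. Assumed layout: linear
// depth and screen-space motion vectors in flat row-major buffers.
#include <cuda_runtime.h>

__device__ float2 minDepthVelocity(const float*  depth,
                                   const float2* velocity,
                                   int x, int y, int w, int h)
{
    float  minDepth = 1e30f;
    float2 result   = make_float2(0.0f, 0.0f);
    for (int dy = -1; dy <= 1; ++dy)
    for (int dx = -1; dx <= 1; ++dx) {
        int sx = min(max(x + dx, 0), w - 1); // clamp taps to the screen
        int sy = min(max(y + dy, 0), h - 1);
        float d = depth[sy * w + sx];
        if (d < minDepth) {                  // nearest layer wins
            minDepth = d;
            result   = velocity[sy * w + sx];
        }
    }
    return result;
}
```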

The overall result is that the foreground foliage layer ends up blurrier than when using the minimum-depth sample for velocity, but the layers behind it are preserved better. The picture is less sharp overall, yet individual leaves in a tree canopy stay visually distinguishable where they would otherwise turn into a smeared mess in motion. This raises another issue: distant scenery, viewed through a small number of up-close foliage layers, accumulates too much velocity error. That is cured by rejecting samples whose depth difference from the center pixel exceeds a set threshold.
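The post gives no code, so the following is only a sketch of how that rejection might look: a 4-tap cross (matching the "4 extra depth taps" mentioned below), where taps whose linear depth differs from the center by more than a threshold are rejected so the dilated velocity cannot jump to a different layer. The function name, the fallback to the center velocity, and the threshold value are my assumptions:

```cuda
// Sketch of depth-threshold rejection (assumed details, see above):
// dilate velocity only across taps that stay on the center pixel's
// depth layer; everything else falls back to the pixel's own velocity.
#include <cuda_runtime.h>

__device__ float2 layerClampedVelocity(const float*  depth,
                                       const float2* velocity,
                                       int x, int y, int w, int h,
                                       float depthThreshold) // scene-dependent
{
    int    ci          = y * w + x;
    float  centerDepth = depth[ci];
    float2 best        = velocity[ci]; // fallback: own velocity
    float  bestDepth   = centerDepth;

    // The 4 extra depth taps in a cross pattern.
    const int2 taps[4] = { { 1, 0 }, { -1, 0 }, { 0, 1 }, { 0, -1 } };
    for (int i = 0; i < 4; ++i) {
        int sx = min(max(x + taps[i].x, 0), w - 1);
        int sy = min(max(y + taps[i].y, 0), h - 1);
        float d = depth[sy * w + sx];
        // Reject taps belonging to a different depth layer.
        if (fabsf(d - centerDepth) > depthThreshold) continue;
        // Among surviving taps, still prefer the nearest one.
        if (d < bestDepth) {
            bestDepth = d;
            best      = velocity[sy * w + sx];
        }
    }
    return best;
}
```

The threshold is presumably a tuning knob: too tight and dilation stops working at genuine silhouettes, too loose and far scenery again inherits the foliage velocity.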

In conclusion, I’d say it is probably not worth the extra effort, as those 4 extra depth taps would be better spent elsewhere.

In my spare time I’m going to look into adaptively changing the neighborhood filter shape based on velocity.