I found these very interesting, and they got me thinking about the whys and hows, so it would be nice if you could share some more details regarding these “minor” improvements. They sound very interesting! There is also the choice of the SNN filter: why did you go with it? Its characteristics seem to produce a less accurate reconstruction of the overall content, and to me it looks prone to producing invalid pixel information (because of the averaging), which can perhaps lead to aliasing. Other techniques exist as well, like the 5-tap bicubic, which might even be more efficient (since it requires fewer samples), and an implementation of it already exists in the TAA code, though it would need a bilinear sampler for the depth to work properly. I also understand that edges can get smoothed out, which at first sounds like a terrible idea, but for calculating accurate velocity information that doesn’t actually sound bad at all. Also, what are your experiences with the difference between using averaged vs. minimum depth for these calculations? Doesn’t the minimum just help separate background content from the foreground, in which case wouldn’t you get better edges by continuing to use it?
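For reference, here is roughly what I mean by the 5-tap bicubic: a minimal CPU-side sketch of the Catmull-Rom trick (the same general idea as the one in the TAA shader). The `Vec2`/`Color` types, the function names, and the `sampleBilinear` placeholder are all mine, standing in for a hardware bilinear fetch, so treat this as a sketch rather than the engine's actual code:

```cpp
#include <cmath>

struct Vec2  { float x, y; };
struct Color { float r, g, b; };

// Placeholder for a hardware bilinear fetch from the history texture;
// returns a constant here just so the sketch compiles on its own.
static Color sampleBilinear(Vec2 /*uv*/) { return { 0.5f, 0.5f, 0.5f }; }

// 5-tap Catmull-Rom: a full bicubic needs a 4x4 texel footprint (16 taps),
// but leaning on the bilinear unit collapses it to 9 fetches, and dropping
// the 4 low-weight corner fetches leaves 5, renormalized at the end.
Color sampleCatmullRom5Tap(Vec2 uv, Vec2 texSize)
{
    Vec2 pos    = { uv.x * texSize.x, uv.y * texSize.y };
    Vec2 center = { std::floor(pos.x - 0.5f) + 0.5f,   // center of texel 1
                    std::floor(pos.y - 0.5f) + 0.5f };
    Vec2 f      = { pos.x - center.x, pos.y - center.y };

    // Per-axis Catmull-Rom weights for the 4 texels around the sample.
    auto weights = [](float t, float w[4]) {
        w[0] = t * (-0.5f + t * (1.0f - 0.5f * t));
        w[1] = 1.0f + t * t * (-2.5f + 1.5f * t);
        w[2] = t * (0.5f + t * (2.0f - 1.5f * t));
        w[3] = t * t * (-0.5f + 0.5f * t);
    };
    float wx[4], wy[4];
    weights(f.x, wx);
    weights(f.y, wy);

    // Merge the two middle taps per axis into one bilinear fetch, placed
    // between texels 1 and 2 so the hardware does the weighting for us.
    float w12x = wx[1] + wx[2], w12y = wy[1] + wy[2];
    float o12x = wx[2] / w12x,  o12y = wy[2] / w12y;

    auto toUV = [&](float tx, float ty) {
        return Vec2{ tx / texSize.x, ty / texSize.y };
    };
    float x0 = center.x - 1.0f, x12 = center.x + o12x, x3 = center.x + 2.0f;
    float y0 = center.y - 1.0f, y12 = center.y + o12y, y3 = center.y + 2.0f;

    // 5 taps in a cross pattern (corners dropped), weights renormalized.
    struct Tap { Vec2 uv; float w; } taps[5] = {
        { toUV(x12, y0 ), w12x  * wy[0] },
        { toUV(x0,  y12), wx[0] * w12y  },
        { toUV(x12, y12), w12x  * w12y  },
        { toUV(x3,  y12), wx[3] * w12y  },
        { toUV(x12, y3 ), w12x  * wy[3] },
    };

    Color sum = { 0, 0, 0 };
    float wSum = 0.0f;
    for (const Tap& t : taps) {
        Color c = sampleBilinear(t.uv);
        sum.r += c.r * t.w; sum.g += c.g * t.w; sum.b += c.b * t.w;
        wSum  += t.w;
    }
    return { sum.r / wSum, sum.g / wSum, sum.b / wSum };
}
```

So the whole 4x4 footprint costs 5 bilinear fetches instead of 9, which is where the efficiency argument comes from.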
Maybe I’m a bit lost on the changes you mentioned, but I’m sure I can offer you further improvements, since you are clearly interested in the calculations and in quality. One thing is certain: every time you perform a calculation on a float value, you end up with precision degradation, which comes from the way floats are stored (a mantissa scaled by an exponent). The lower value range (0 to 0.5) has finer absolute resolution than the 0.5 to 1 range, which is rather silly because it effectively means a night shot gets better overall quality than a daylight shot. One improvement for the engine would be to invert the input image before the post-processing pipeline works on it, to gain a tiny bit of quality at the end. But that would still cause minor degradation; the only real solution to this problem is to raise the bit depth of the render targets.
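To illustrate the claim about resolution across ranges, here is a tiny standalone check of the spacing between adjacent representable floats (the ULP) at a few values. It uses plain 32-bit floats rather than the engine’s 16-bit render targets, but the relative behaviour is the same:

```cpp
#include <cmath>
#include <cstdio>

int main()
{
    // Distance to the next representable float above x: the absolute
    // precision available around x. It doubles each time the exponent
    // steps up, so dark values sit on a finer grid than bright ones.
    const float samples[] = { 0.1f, 0.25f, 0.4f, 0.6f, 0.75f, 0.9f };
    for (float x : samples) {
        float ulp = std::nextafter(x, 2.0f) - x;
        std::printf("ulp(%.2f) = %.3g\n", x, ulp);
    }
    return 0;
}
```

Every value in [0.5, 1) lands on a grid twice as coarse as values just below 0.5, which is exactly the asymmetry the inversion trick would exploit.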
I actually found a console command for this case (which was surprising at first): r.PostProcessingColorFormat 1 changes your 16-bit channels to 32 bits, resulting in a PF_A32B32G32R32F texture format for the entire post-process pipeline. I believe temporal AA would benefit the most, since it reuses its previous history content, and the repeated processing of that content slowly loses quality over numerous frames; the samples blended in around frames 5-8 therefore end up with less quality than the first 4.
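As a rough illustration of why the history buffer is the sensitive part, here is a small simulation of TAA-style history accumulation (an exponential blend of new samples into the history) where the history is re-quantized every frame. The quantizer just truncates a 32-bit float’s mantissa to the 10 bits a half float carries; it ignores fp16’s exponent range and rounding, so take it as a sketch of the effect, not an exact model of the render target:

```cpp
#include <cstdint>
#include <cstdio>
#include <cstring>

// Crude fp16-like quantizer: keep only the 10 mantissa bits a half float
// stores, drop the rest. (Ignores fp16 exponent clamping and rounding.)
static float quantizeHalfLike(float x)
{
    uint32_t bits;
    std::memcpy(&bits, &x, sizeof bits);
    bits &= 0xFFFFE000u; // clear the 13 low mantissa bits (23 - 10)
    std::memcpy(&x, &bits, sizeof x);
    return x;
}

int main()
{
    // TAA-style accumulation: blend 10% of the new sample into the
    // history each frame. The scene is static, so the history should
    // converge to exactly the sample value.
    const double sample = 0.73;   // some fixed scene color
    const double alpha  = 0.1;

    double historyRef  = 0.0;     // full-precision reference
    float  historyHalf = 0.0f;    // re-quantized every frame, like a
                                  // 16-bit history render target

    for (int frame = 1; frame <= 64; ++frame) {
        historyRef  = historyRef + alpha * (sample - historyRef);
        historyHalf = quantizeHalfLike(
            historyHalf + (float)alpha * ((float)sample - historyHalf));

        if (frame == 4 || frame == 8 || frame == 16 || frame == 64)
            std::printf("frame %2d: error vs reference = %.3g\n",
                        frame, historyRef - (double)historyHalf);
    }
    return 0;
}
```

The full-precision reference keeps converging toward the sample, while the quantized history stalls a few quantization steps short once the per-frame correction drops below its grid spacing; that widening gap over the frames is the kind of drift the 32-bit format avoids.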