Judging from this sample image by visual examination alone, I can see very strongly misplaced pixels that should, in theory, produce the opposite effect and cause hard jumps and invalid motion information. How does this actually work out for you, then? Perhaps there are other effects working in harmony with this noisy result, reducing the blurring artifacts? (Think of the dither pattern and its easing effect on pixel accumulation.)
I actually tried averaging the depth measurements, but maybe I did it wrong.
PosN.z = (PosN.z + Depths.x + Depths.y + Depths.z + Depths.w) / 5.f;
This line produces aliasing artifacts and crawling jaggies. What is your approach to the average?
I was unable to reproduce this effect, but I will repeat my experiments with this average once you confirm the code should work, since the unexpected results might be down to other changes I made to the TAA at the same time.
Makes sense, and I'll look into this too! Thanks for the mention.
Does this actually mean you have set up the full 9-sample kernel for the evaluation, then? I don't find this too much trouble as far as timing goes; GPUs tend to be very good at sampling efficiency.
That is going to be a huge effort, I believe, since an interpolated offset in the sample coordinates would probably require the weights to be adjusted as well. The sample weights that correct the measurements are generated by the engine on the CPU before the TAA executes, so this would probably require modifying the relevant C++ code too. Although you can perhaps get away with the Texture2DSampleBicubic method in shaders, which should calculate the correct weights for you. I'm just not so sure it would work with offset UVs. Does it?