Adaptive-Blended TAA: a tiny bit of magic for sharp and responsive scenes


I decided that I too will make a post to show you a tiny little trick in the TAA shader code that can make it look sharp and responsive, without the obvious artifacts of trembling, noisy pixels and wandering jaggies. Well, it’s a convenient lie you will witness here, and I obviously cannot work wonders with a single changed line of code, but at least it is easy to implement, can be applied to any engine build (source or launcher), and you can shape its behavior to better fit your scenes. Once you’re done with it, it only takes a moment to remove, and you can go back to the old screenies. So why not?


Here is the file for 4.17, with line 566 marked where I would put this stuff:

BlendFinal = clamp( BlendFinal + sqrt(sqrt(distance(BackN.xy, UV.xy))) * 1.0f , BlendFinal , 1.0f );

Now add this line of code there, then press Ctrl+Shift+. (dot) in the editor. It will recompile the shaders, and the TAA will have the adaptive blend logic in place.



This little code has since become part of a sharpening solution and a pull request by the talented @hallatore, since he found it generally helps control the blurring of the scene during camera movement. Nice job! Check out that topic for further information on the new post process effect and how to use it properly. Well done!


First and foremost, I will state that nothing is wrong with Epic’s TAA. You may disagree, and that’s okay; I can’t call it exceptionally flawless either, but it is a very good implementation, and Epic has taken great care of the code over numerous iterations in the last couple of years to make it work better for you. It’s something you can depend on, a friend that works out in almost any situation, but not without side effects. It ain’t perfect. It cannot be, but Epic has given us a very good, alias-free, nice-looking, almost photorealistic output for our joy. They did a very good job on this, and I am only tampering with the code because it does not always suit all my needs.

It doesn’t fit all of yours either. Right? That’s why you are here, so let’s get to it. In this video I will present what a tiny little change to the code can do: it will not only hide many of the well-known problems, but make your scenes look better (relatively speaking) without giving up too much stability or too many features. Yes, I’m just going to hide the problems. It’s no solution to the problems themselves; we will simply pick up the broom from mommy’s corner and sweep everything under the carpet. That is what will happen here.

The video showcase

There will be 4 takes in the video, all showing the same results but with different content to display some of the issues and outcomes. I will switch between the settings, so always watch my hands!

The 1st scene is the noise demo, a kind of sadistic-synthetic pleasure for your torment (and I also want to show you YouTube’s flaws in video quality). The noisy environment shows best how TAA leaves most of your scenes with strong, ugly ghost remainders behind moving objects, burning in the previous history information so it just won’t disappear. By default this blending happens over a period of 25+ frames, but to make it take longer I will set the scene frame rate to 15 fps, so it takes about 2 seconds for those burn-ins to more or less disappear. With or without motion blur, they remain there for a veeeery looong time.

Then I apply this magic console command I implemented, called “r.TemporalAADynamicMode 1”, which takes a float input value of 1.0 and ignites the extra code added deep in the Temporal Anti-Aliasing shader, applying this adaptive blending everywhere.

But what is the problem with TAA here? I can’t tell. I just made it disappear with a neat little trick. What basically happens is that the wandering (moving) pixels will no longer burn in: the history blending weight is set relatively low for all those moving pixels, so the anti-aliasing is applied relatively weakly to their remainders. They are moving, so they look jagged and smeared already. I just remove those pesky bugs from the rest of the scene, and in turn the entire scene becomes clean and sharp. Since it is all based on a quickly extrapolated velocity of the moving pixels (in screen space), the static objects retain their nice shape with a good amount of anti-aliasing. It only affects the moving content. Really. Just see for yourself. The spooky ghosts are gone and the noise is less blurry, while the objects in the static background are nicely anti-aliased. The scene is now sharp and responsive where it must be; stable, smooth and shiny where it has to be. Cool balance, huh?

So yeah, that’s enough of the noise tests; let’s move to Scene 2 (at 2:15), the cherry trees from the free Infinity Blade: Grass Lands content. It’s a cool package and I’m sure you’re familiar with its content, so there it is. Once I apply the adaptive mode, the trees and grass start to show sharp and clean details, and the flowers and leaves appear without this median-blurred nightmare. I take a closer peek at a leaf and you will see its inner detail is no longer glued to the screen; it moves nicely once I take this magic velocity information into consideration in the pixel blending.

The third scene (at 4:10) is the car scene. I will take a little time here to show the problems with SSR as it gets smeared all over the place; the TAA also distorts it by bending the reflections toward your viewport, best seen on the reflections of the hills and the loop. A ghost shape is also there because the road has become way too bright, and despite the brilliant Dynamic Anti-Ghost solution in Epic’s TAA, that presents another problem of its own by cutting out those shadow areas on the bright, shiny road, hence the ghost shape appears. I also took a peek inside the car; you can see its hull is burned into the road areas as well, causing the ghost effect to appear again.

Why don’t we make that disappear too, along with the distortions and everything? Poof. Magic. It’s gone. But here come the expenses, the price of all this fancy trickery. Did you know that those ugly smeared pixels you all hate are actually hiding something else? Specular aliasing! Boom. You will get loads of it, since it is no longer hidden. This is something you will have to address later, but for now it only shows up under the worst circumstances, and only for a moment while the shiny thing is moving relative to the viewport. Once the movement stops, the specular aliasing gets blended in again, and the problem is gone. Phew! Almost gone. Once you turn off this adaptive thingamajiggy, you will see that TAA doesn’t quite take care of the specular aliasing anyway: it usually pops up at the edges of the screen while you rotate or move the camera, only for a moment, before it instantly gets smeared over in the next frames and disappears. We can’t have this on the road, since it looks awful. Or maybe we do want the specular aliasing there, to show off those blazing fast speeds of our cars. Or maybe not?

Anyway, I’m quite certain the specular aliasing can be treated in many ways using other dark practices; for instance, I could tinker with the blending equation to exclude these high-contrast areas. It would probably work with HDR too; I have not tested all of this. But let’s leave something for the next day. Today is the big triumph of brooms and carpets.

The final take (at 8:16) is the jungle scene, for your eyes and pleasure, and because it looks very cool; I just love this scene from Dokyo. The leaves are smeared at first because the adaptive mode starts off, but once I enable it they become sharp and detailed. I also apply one extra sharpening pass on them, which actually makes them a little jagged and aliased, but oh well, YouTube and its videos; I wasn’t sure how it would look in the end. (Likely it won’t even show you half the fine details.)

The how

Well, you can take a good tour yourself on how to modify the engine, and implementing this console variable worked out better for me, so you can go that path too. But you don’t really have to; you might as well just modify PostProcessTemporalCommon.ush, and that will be enough. I will show you how to implement this in the 4.17 shaders, but it can be done in other engine versions as well; depending on which version you use, it might require some engineering (coder) work to figure out the right location and implementation. But I’m quite certain there are ways to add it to both the Epic and NVIDIA branches, from very early versions to future releases.

Here is the file for 4.17, with line 566 marked where I would put this stuff:

Let’s just add this one single line of code.

BlendFinal = clamp( BlendFinal + sqrt(sqrt(distance(BackN.xy, UV.xy))) * 1.0f , BlendFinal , 1.0f );	
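If you’re curious what this single line actually computes before recompiling anything, here is the same math modelled in plain Python (a sketch only; `adaptive_blend` is a made-up name, and I’m assuming BlendFinal acts as the current-frame weight, which the clamp’s lower bound suggests, since the line can only raise it):

```python
import math

def adaptive_blend(blend_final, back_n, uv, strength=1.0):
    """Python model of the HLSL one-liner: raise the blend weight by
    the fourth root of the screen-space reprojection distance, clamped
    so it never drops below the original weight and never exceeds 1."""
    dist = math.hypot(back_n[0] - uv[0], back_n[1] - uv[1])
    boosted = blend_final + math.sqrt(math.sqrt(dist)) * strength
    return min(max(boosted, blend_final), 1.0)

# A static pixel (zero reprojection offset) keeps its original weight:
print(adaptive_blend(0.04, (0.5, 0.5), (0.5, 0.5)))  # 0.04

# A pixel that moved 1% of the screen gets pushed hard toward the
# current frame, which is what kills the history smear:
print(adaptive_blend(0.04, (0.51, 0.5), (0.5, 0.5)))
```

The key property is that static content is completely untouched; only pixels with a nonzero velocity get their history contribution reduced.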

Once you’re done, save the file and go back to your engine. Type “r.ShaderDevelopmentMode 1” (without the quotes) into the console and press Enter. It will make you feel safe and everything. Now you can press the magic key combo that recompiles the shaders and makes these changes final.

Ctrl+Shift+. (dot). The compile process will start and finish in a moment. In case you happened to miss this line or made a typo, the compiler will warn you to fix the code before continuing. Do it, save, then press the Retry button to try again and see what you did.

That’s all; test your scenes, and much joy. I may propose this as a pull request (it will unlikely make it into the engine), but I’m more eager to hear your ideas regarding this and other changes/improvements. It’s okay if you find this code awful; I know it is. That’s the best I can provide for today. In any other case, please enjoy and profit!


You can modify the multiplier on the velocity (* 1.0f) to be larger or smaller, to make the effect more or less pronounced. I just rigged my console command to that value, and I use 2.0f sometimes in the video.

This value affects the specular aliasing as well: the smaller the value, the less visible the specular issue, but the more the ghosts will peekaboo. You can also add or remove sqrt()'s around this velocity term to better fit your scene’s needs. The fourth root I use here works out on most of my sceneries, but a single square root ramps up more gently for small movements, so it might suit slow-moving scenes better. I have also experienced better clarity with values around 1.5f, but that may be just me.
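To see why the number of nested sqrt()'s matters, here is each curve’s response at a tiny screen-space distance, sketched in Python (illustrative numbers only, not taken from the engine):

```python
# How the root count shapes the velocity response for a tiny motion:
# for distances below 1.0, each extra sqrt() boosts small values
# harder, so the fourth root reacts much more aggressively to subtle
# movement than a single square root does.
dist = 0.0001                  # a pixel that moved 0.01% of the screen

single_root = dist ** 0.5      # sqrt(d)
fourth_root = dist ** 0.25     # sqrt(sqrt(d))

print(single_root, fourth_root)  # roughly 0.01 vs 0.1
```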

Other Tips

Some forum members have come up with settings for the Temporal AA parameters to change its behavior, reducing/extending the number of samples taken and the blend weight. Those changes can be used in harmony with this modification, but they may produce different results, so you will have to adjust the numbers accordingly. Just blame me if you have trouble with those settings. It’s all on me.

Improvements for the Next Gens

Since this modification does not resolve any of the problems but just hides some of them under the carpet, it is fair to assume there are much better ways to approach these problems and find good solutions. I also read the forums and have seen a few members drop little notes and side mentions of certain improvements here and there; it would be very nice of them to share those in greater detail for others to learn from. The way the engine approaches temporal anti-aliasing, along with the rest of the equations, is not the only part that requires more attention; the actual behavior perhaps goes certain ways that most definitely won’t suit many users.

Now, after this little training, you are ready to open up the code again, check its content, and learn a great deal from it, thanks not only to the comments they put in there, but to the plain writing of the code itself. I highly advise you to think about what you see there, and I’m sure you will come up with a great many ideas. The developers at Epic usually follow quick and simple paths, putting little stress on the CPU/GPU while sacrificing a little quality. But they do make sacrifices. Experiments are in there too. You can find many corners where the quality can be improved!

It’s very easy to iterate on the shaders, and recompiling them has become fast, so it’s not that big of a deal, really.

The sharpen

(I’m not sure i must address this, but i leave this here for now.)

No, you should not apply sharpening to any of your scenes. It will remove the nice anti-aliasing and leave you with grandma’s staircase from the old crackling house on all the edges. Spooky stuff indeed! But I can’t tell you what to do. You will go with the sharpening anyway. Well, here is an idea, which I put in the post process after the tonemapper.

That’s adaptive sharpening in a simple form. It excludes the edges and applies the sharpening only to the fine, molten details. You can make it distance-aware and only sharpen the close-up content by using the depth information of your viewport. Close objects will become more pronounced, but if you overdo it you will end up with a shading draw. Just don’t go there; it’s enough to apply a fine amount of it.
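Since the actual post-process material isn’t pasted here, the idea can be sketched in Python with NumPy instead (all names made up; a toy unsharp mask whose strength fades with depth, not the material I used):

```python
import numpy as np

def depth_aware_sharpen(img, depth, amount=0.5):
    """Toy sketch of a distance-aware sharpen: an unsharp mask whose
    strength fades with depth, so only close-up content gets more
    pronounced. img/depth are 2D float arrays; depth is normalized
    0 (near) to 1 (far). Edges wrap, purely for simplicity."""
    blur = np.zeros_like(img)
    for dy in (-1, 0, 1):                 # 3x3 box blur
        for dx in (-1, 0, 1):
            blur += np.roll(np.roll(img, dy, axis=0), dx, axis=1)
    blur /= 9.0
    detail = img - blur                   # the fine "molten" detail only
    weight = amount * (1.0 - depth)       # fade out with distance
    return np.clip(img + weight * detail, 0.0, 1.0)
```

A flat region has no detail to boost, and anything at depth 1.0 is left untouched, which is the whole point of making it distance-aware.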

There is also a tonemapper sharpening console command, a nice implementation you can try; it pronounces the edges but removes some anti-aliasing as well. No, I don’t recommend it.

Final words

Thanks for your attention. Any questions? Additions? Please share all your ideas how else you would improve on the TAA!

looks good! I’ll have to try it

thanks for the time spent on finding this out and also the long explanation

just one question about “r.TemporalAADynamicMode 1” -> it seems like a much faster way to iterate toward a suitable value. Would be great if you could share this one as well :slight_smile:

Do I understand correctly that you adjust the blend based on the distance the pixel moved between this frame and the last?
Need to test it out later :slight_smile:

What about moving objects when the camera stays still?

You can follow the lead of CurrentFrameWeight in the engine source and replicate the same thing; you will have a float value in the end. Or simply reuse that value while you are testing this, though that is not very elegant.

An object moving in front of a still camera is not handled by this case. The Dynamic Anti-Ghosting will take care of that, but only for the character, not its shadows and other rendering features. It may be a good idea to attach your camera to a character socket, because continuous camera movement is required here to have information available for this equation. But why is that? At the very end of the TAA you can just debug this distance information in OutColor to see what is happening.

I’ve been using nearly the same code for quite a while now (with the exception of using dot instead of distance and a few minor extras, like additionally biasing the blend towards the current frame in regions of highly varying depth, coupled with using a 3x3 SNN filter and averaged depth instead of the minimum depth in a cross sample for velocity).

While in essence this tweak simply reduces the strength of the anti-aliasing depending on a pixel’s velocity, what Konflict offers is a very good all-round solution, especially for foliage-heavy scenes.

I ended up reducing the blend factor to 0.5f, as it seems to reduce jitter on very noisy stuff.
There might be some stuff that becomes more visible with it reduced, but in my scene it looks rather nice.

BlendFinal = clamp( BlendFinal + sqrt(sqrt(distance(BackN.xy, UV.xy))) * 0.5f , BlendFinal , 1.0f );

Here is from a moving camera. :wink:


I think you should hook it up to a console variable like r.TemporalAAMovementBlurReduction and let the default value be 0.
Then submit a pull request with something like this! :wink:

BlendFinal = clamp( BlendFinal + sqrt(sqrt(distance(BackN.xy, UV.xy))) * MovementBlurReductionAmount , BlendFinal , 1.0f );

Just made a branch with your changes. Mostly so I have it around :slight_smile:

I was wondering if you can do the same trick I do: if you blend with the contrast of HistoryColor, you can reduce blur based on the difference in contrast change.

Excellent job! I’m happy you have found this useful too!

It is a very neat idea, and it was among the first things I tried to improve the quality with, but it happens to have a very grim side effect on the ghosting. Once you move your camera, the excluded luminance information results in a very low mask value for those areas, producing a “hole cutting” effect: the ghosts get reduced to some degree, but their halo remains very apparent.

While I was looking for a solution to this problem, I actually found a very neat luminance extraction trick in the temporal AA’s code, which is not used by the main AA pass, only by some of the less interesting ones.

min(abs(LumaMin-LumaHistory), abs(LumaMax-LumaHistory))

This results in such a thin line around the object that it helps preserve the anti-aliased edges, but once you move your camera, the history information produces high mask values. With a little exaggeration of its effect, the result looks very promising.

(The applied mask can be seen on the right side of the viewport)

As usual, the video quality is terrible, but the resulting mask is quite useful for reducing the blurring AND it provides very strong shape masks for the ghost reduction, all in one equation. But it’s not good enough; there are cases where the resulting ghost mask actually paints a bit of a ghost of its own, because the extracted luma information gets mixed with the background, essentially creating a new ghosting artifact. I fine-tuned it in the video to reduce the side effects, but I found this equation not good enough for all circumstances. It helps, but not enough. Still, it is a very neat little trick, and I recommend you look into it!
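To get a feel for that expression, here is the same min/abs logic in plain Python (just a numeric model of the HLSL line, with made-up sample values):

```python
def luma_clamp_mask(luma_min, luma_max, luma_history):
    # The extraction trick from the TAA code: the mask stays tiny
    # while the history luma hugs one of the neighborhood bounds
    # (stable content), and grows once it drifts away from both
    # (history invalidated by movement).
    return min(abs(luma_min - luma_history), abs(luma_max - luma_history))

print(luma_clamp_mask(0.2, 0.8, 0.79))  # history near a bound -> tiny mask
print(luma_clamp_mask(0.2, 0.8, 1.5))   # history far outside  -> strong mask
```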

As I have mentioned before, a more robust anti-ghosting solution is currently on my desk; it basically extracts the deghost mask from depth differences, using a custom temporal buffer to store and retrieve it. A bit complicated, but the result is very effective against ghosts, though it has only a medium effect on the fine details. It also does not take luma differences into the equation (since it is depth-only information), so a combination of both techniques is what I’m welding together currently. The results are promising.

One thing that also bothers my mind is the specular aliasing issue, which can accidentally pop in on the deghosted areas. The current implementation of TAA also suffers from this error, but that does not keep me from looking for a solution. The idea would be a very strong negative value introduced into the depth difference equation, to reduce its effectiveness in areas where the shiny things are gleaming.

I found these very interesting, and they got me thinking about the whys and hows; it would be nice if you could share some more details regarding these “minor” improvements. :slight_smile: They sound very interesting! There is also the choice of the SNN filter: why did you go with it, since its characteristic seems to produce a less accurate reconstruction of the overall content, and to me it looks prone to producing invalid pixel information (because of the mean), which can perhaps lead to aliasing? Other techniques exist as well, like the 5-tap bicubic, which might also be more efficient (since it requires less sampling), and an implementation of it exists in the TAA code, though it would require a bilinear sampler for the depth to work properly. I also understand the edges can get smoothed out, which at first sounds like a terrible idea, but for calculating accurate velocity information it doesn’t actually sound that bad at all. Also, what are your experiences with the differences between using averaged vs. min depth for the calculations? Doesn’t the min just help separate background content from the front, in which case wouldn’t you get better edges by continuing to use it?

Maybe I’m a bit lost on these changes you have mentioned, but I’m sure I can offer you further improvements, as you are very interested in calculations and quality. One thing is for sure: every time you make a calculation on a float value, you end up with precision degradation, which comes from the nature of how floats are stored (exponent and mantissa). The lower value ranges (0-0.5) have higher resolution compared to the (0.5-1) range, which is very silly because it actually means a night shot has better overall numerical quality than a daylight shot. It would be an improvement for the engine to inverse the input image of the post-processing pipeline before working on it, to get a tiny bit better quality at the end. But it would still cause minor degradation, and the only real solution to this problem is to raise the bit depth of the context.
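The resolution asymmetry is easy to verify, for example in Python with NumPy, by looking at the spacing (ULP) of float32 around a dark value versus a bright one:

```python
import numpy as np

# Distance to the next representable float32 value (the ULP):
# the step size doubles whenever the exponent steps up, so bright
# values are stored more coarsely than dark ones.
dark   = np.spacing(np.float32(0.25))  # step near a dark pixel value
bright = np.spacing(np.float32(0.75))  # step near a bright pixel value

print(dark, bright)  # the bright step is exactly twice the dark step
```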

I actually found a console command for this case (which was surprising at first): r.PostProcessingColorFormat 1 changes your 16-bit channels to 32 bits, resulting in a PF_A32B32G32R32F texture format for the entire post-process pipeline. I believe temporal AA would benefit the most, since it reuses its previous history content, and the processing of that content slowly loses quality over numerous frames; therefore the samples around frames 5-8 have less quality in the end compared to the first 4 shots.
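The history-degradation claim can be modelled crudely in Python with NumPy: blend a fixed color into the history at a small current-frame weight, once in half precision and once in single precision, and watch them drift apart (a toy model of repeated history blending, not the engine’s actual accumulation):

```python
import numpy as np

TARGET = 0.7   # incoming scene color (held constant for simplicity)
W      = 0.04  # current-frame blend weight

h16 = np.float16(0.1)  # history kept in half precision
h32 = np.float32(0.1)  # history kept in single precision

for _ in range(60):    # one second of accumulation at 60 fps
    h16 = np.float16(h16 * np.float16(1 - W) + np.float16(TARGET) * np.float16(W))
    h32 = np.float32(h32 * np.float32(1 - W) + np.float32(TARGET) * np.float32(W))

drift = abs(float(h16) - float(h32))
print(drift)  # small but nonzero: the fp16 history has drifted
```

Every blend step rounds to the storage format, so the lower-precision history accumulates error frame after frame, which is exactly what the wider post-process format avoids.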

The choice was influenced by grass-heavy scenes and a set task to reduce smearing between multiple layers of grass, while still keeping maximum anti aliasing in place.

The aim was to stop (or minimize) the velocity offset jumping off from the grass layer the original pixel belongs to.

The overall result is that the foreground layer of foliage ends up more blurry compared to using the sample with min depth for velocity, but the layers behind it are preserved better. The picture is less sharp, but you can visually distinguish individual leaves in a tree canopy which would otherwise turn into a smeared mess in motion. This brings up an issue where far scenery, looked at through a low number of up-close foliage layers, has too high a velocity error. Cured by rejecting samples whose depth difference from the center pixel exceeds a set threshold.

Conclusively, I’d say that it is probably not worth the extra effort, as these 4 extra depth taps would be better spent elsewhere.

In my spare time I’m going to look into adaptively changing the neighborhood filter shape based on velocity.

Judging by this sample image, just by visual examination I can see very strongly misplaced pixels that theoretically should result in the opposite effect, causing hard jumps and invalid movement information. How does this actually work out for you, then? Maybe there are other effects working in harmony with this noisy result, thereby reducing the blurring artifacts? (Think of the dither pattern and its easing effect on the pixel accumulation.)

I actually tried to average the depth measurements, but maybe I did it wrong.

PosN.z = ( ( PosN.z + (Depths.x + Depths.y + Depths.z + Depths.w) ) / 5.f );

This code line causes aliasing artifacts and running jaggies. What is your approach to the average? :slight_smile:

I was unable to produce this effect, but I will repeat my experiments with the average once you confirm the code should work; it might have something to do with other changes I made to the TAA at the same time, which could be the cause of the unexpected results.

Makes sense, and i’ll look into this too! Thanks for the mention.

Does this actually mean you have set up the full 9-sample kernel for the evaluation, then? I don’t find that too much trouble timing-wise; GPUs tend to be very good at sampling efficiency :slight_smile:

That is going to be a huge effort, I believe, since an interpolated offset in the measurement coordinates would probably require the weights to be adjusted as well. The sample weights that correct the measurements are generated by the engine on the CPU before the execution of the TAA, so that would probably require modifying the relevant cpp code as well. Although you can perhaps get away with the Texture2DSampleBicubic method in the shaders, which should calculate the correct weights for you. I’m just not so sure it would work with offsetted UVs. Does it?

I have uploaded this little video to showcase my attempt at hunting some ghosts. I am using a very obscure combination of many experiments to clean up these areas, yet the shadows still remain visible, since I have found no way to mask them properly (not from a post-process pass?). It’s not like Epic’s Dynamic Anti-Ghost would do anything about these shadows either, as their solution takes care of the remainders of the character mesh and some dynamic objects only. I just hope there is a way to remove the shadow spots as well; in case anyone reads this and has an idea, please share all your brilliant thoughts here. :slight_smile:

This presentation is pretty good: Temporal Antialiasing in Uncharted 4

These are basically in UE4 too already; e.g. the Dynamic Anti-Ghost does just the same, and the history clamping is there as well, but none of these techniques talk about issues with shadows. The problem with shadows only pops up once you have very dense (e.g. noisy) textures where the clamping will fail. This is actually mentioned in this ppt too, in the ghosting section, which inspired them to use the stencil buffer, but that only covers the character mesh, nothing about its shadow (just like the Dynamic Anti-Ghost).

In the video I show here, I actually turned this feature off in the TAA, since the depth differences solve the problem just as well and don’t require additional rendering of these objects either, while handling all objects in the scene that have any depth information rendered (including foliage, static objects etc.), which makes it a more efficient and viable replacement. Combining the two techniques would be possible, but I have yet to see a case where it would help anything.

Thanks for bringing this ppt up; it is indeed a nice read.

Are the shadows a problem in normal game scenarios? The noise material is good for provoking artifacts, but it’s a bit noisier than what you would normally see.

The closest things I can think of are maybe foliage or concrete walls?

Yes, concrete walls are prone to produce similar issues, and sand, snow and caves could be affected too under the right circumstances. Foliage is actually less of an issue, because the local (3x3) area would likely guide the clamping well enough.

As you can see in this image, the problem won’t always come up, only under dire circumstances :slight_smile:


It requires a sharp eye to see what is going on, but there is good news on the horizon for all users of the engine, mostly targeting the temporal anti-aliasing post process, quality-wise. Funny, they decided to apply a velocity term to the blending that will reduce the blur. Brilliant idea, isn’t it? Wish I had thought of this before. Oh wait…! :smiley:

Let’s just hope for the best; that it all works out in the end is the only thing that matters. I believe the new changes will make this tiny code in my first post obsolete in the next engine versions. It may even be possible to port most of these changes to older engine versions, to make them look better! By the look of the commented code lines, and the fact that they have not announced the modifications (so far), I think we can expect more changes in the near future, as it has become a priority for the rendering team to work on this code. Nice job!

I just wish the brilliant pull request of @hallatore had been pulled before the changes… Maybe later!

@Konflict: I played around with blurring the output when you move.

I’m on the latest branch, so MovementFactor might differ. I put it right at the bottom above the #undef stuff.

    HistoryColor.rgb = YCoCgToRGB(HistoryColor.xyz);
    Neighbors[1].rgb = YCoCgToRGB(Neighbors[1].xyz);
    Neighbors[3].rgb = YCoCgToRGB(Neighbors[3].xyz);
    Neighbors[4].rgb = YCoCgToRGB(Neighbors[4].xyz);
    Neighbors[5].rgb = YCoCgToRGB(Neighbors[5].xyz);
    Neighbors[7].rgb = YCoCgToRGB(Neighbors[7].xyz);

    //float MovementFactor = saturate(sqrt(sqrt(distance(BackN.xy, UV.xy) / 20)));
    float MovementFactor = saturate(Velocity / 20);
    float4 BlurredColor = (Neighbors[1] + Neighbors[3] + Neighbors[5] + Neighbors[7] + (Neighbors[4] * 4) + (HistoryColor * 4)) / 12;
    OutColor.rgb = lerp(OutColor.rgb, BlurredColor.rgb, MovementFactor);
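A quick sanity check on that blur mix, modelled in Python (just the arithmetic, not the shader):

```python
# The blur kernel above mixes four cross neighbors (weight 1 each),
# the center tap (weight 4) and the history (weight 4), normalized
# by 12, so the weights sum to one and overall brightness is kept.
weights = [1, 1, 1, 1, 4, 4]
assert sum(weights) == 12

def movement_factor(velocity):
    # saturate(Velocity / 20): no blur when still, full blur at 20+
    return min(max(velocity / 20.0, 0.0), 1.0)

print(movement_factor(0.0), movement_factor(10.0), movement_factor(40.0))  # 0.0 0.5 1.0
```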