Hello MadScorp,
Here is how Fast Approximate Anti-Aliasing (FXAA) works.
FXAA smooths edges across all pixels on the screen, including those inside alpha-blended textures and those produced by pixel shader effects, which MSAA previously could not touch without oddball workarounds.
Version 3 of the FXAA algorithm takes about 1.3 ms per frame on a typical low-end video card. Earlier versions were found to run at roughly double the speed of 4x MSAA, so you are looking at a modest 12 to 13 percent cost in framerate to enable FXAA, and in return you get a considerable reduction in aliasing.
The downside is that you may see a bit of unwanted edge softening ("reduction") inside textures or in other places.
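Just to make the idea concrete, here is a heavily simplified C++ sketch of the core FXAA idea: detect edges from local luma contrast, then blend along them. All the type and function names here are my own for illustration, and the real FXAA 3 does considerably more (edge-direction detection, edge-end search, sub-pixel aliasing handling), so treat this as a sketch of the principle only.

```cpp
#include <algorithm>
#include <vector>

struct Color { float r, g, b; };

// Minimal image wrapper so the sketch is self-contained (hypothetical, not a real API).
struct Image {
    int width, height;
    std::vector<Color> pixels;
    Color At(int x, int y) const {
        x = std::clamp(x, 0, width - 1);
        y = std::clamp(y, 0, height - 1);
        return pixels[y * width + x];
    }
};

// Perceptual luminance, as used by luma-based AA filters.
static float Luma(const Color& c) {
    return 0.299f * c.r + 0.587f * c.g + 0.114f * c.b;
}

// Very rough FXAA-style filter for one pixel:
// 1) measure local luma contrast against the four neighbours,
// 2) below the threshold, leave the pixel alone (no visible edge),
// 3) otherwise blend the pixel toward the neighbourhood average.
Color FxaaLikeFilter(const Image& img, int x, int y, float threshold = 0.125f) {
    Color c = img.At(x, y), n = img.At(x, y - 1), s = img.At(x, y + 1),
          e = img.At(x + 1, y), w = img.At(x - 1, y);

    float lumaMin = std::min({Luma(c), Luma(n), Luma(s), Luma(e), Luma(w)});
    float lumaMax = std::max({Luma(c), Luma(n), Luma(s), Luma(e), Luma(w)});
    if (lumaMax - lumaMin < threshold)
        return c;  // no significant local contrast, leave untouched

    Color avg = { (c.r + n.r + s.r + e.r + w.r) / 5.0f,
                  (c.g + n.g + s.g + e.g + w.g) / 5.0f,
                  (c.b + n.b + s.b + e.b + w.b) / 5.0f };
    float blend = std::clamp((lumaMax - lumaMin) / std::max(lumaMax, 1e-4f), 0.0f, 0.75f);
    return { c.r + (avg.r - c.r) * blend,
             c.g + (avg.g - c.g) * blend,
             c.b + (avg.b - c.b) * blend };
}
```

That contrast test is also where the downside comes from: it fires on high-frequency detail inside textures just as readily as on geometric edges, which is why FXAA can soften things you did not want softened.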
Temporal Anti-Aliasing (TAA) works as follows.
Temporal AA applies a sub-pixel jitter to the final MVP transformation matrix, alternating it every frame, and combines the two frames in a post-process style pass. This way temporal AA effectively doubles the sampling resolution at almost no cost.
The result of such an implementation looks perfect on still screenshots (and you can implement it in a couple of hours on a high-end project), but it breaks in motion: the previous-frame pixels that correspond to the current frame were in different positions. This can be handled by using motion vectors, but sometimes the information you are looking for was occluded. To address that, you cannot rely on depth (the whole point of this technique is getting extra coverage and edge information from samples missing in the current frame), so you can try to rely on comparing motion vector magnitudes to reject mismatching pixels.
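To make that concrete, here is a minimal CPU-side sketch of a two-frame temporal resolve, assuming you keep a history color buffer and motion vector buffers around. The buffer types, the two-sample jitter pattern, and the rejection threshold are all illustrative assumptions, not any engine's actual TAA.

```cpp
#include <algorithm>
#include <cmath>
#include <vector>

struct Vec2  { float x, y; };
struct Color { float r, g, b; };

// Minimal screen-sized buffer (hypothetical, just to keep the sketch self-contained).
template <typename T>
struct Buffer {
    int width, height;
    std::vector<T> data;
    T At(int x, int y) const {
        x = std::clamp(x, 0, width - 1);
        y = std::clamp(y, 0, height - 1);
        return data[y * width + x];
    }
};

// Per-frame sub-pixel jitter, in pixels, alternating every frame. This offset is
// folded into the projection part of the MVP matrix before rendering, so each
// frame samples slightly different sub-pixel positions.
Vec2 JitterForFrame(unsigned frameIndex) {
    return (frameIndex & 1) ? Vec2{ 0.25f, 0.25f } : Vec2{ -0.25f, -0.25f };
}

// Temporal resolve for one pixel: reproject into the previous frame with the
// motion vector, reject the history sample when the motion vector magnitudes
// disagree too much (a cheap stand-in for "that spot was occluded last frame"),
// then blend current and history.
Color TemporalResolve(const Buffer<Color>& current,
                      const Buffer<Color>& history,
                      const Buffer<Vec2>&  motionCurrent,
                      const Buffer<Vec2>&  motionHistory,
                      int x, int y)
{
    Color cur = current.At(x, y);
    Vec2  mv  = motionCurrent.At(x, y);                // pixels moved since last frame

    int px = static_cast<int>(std::lround(x - mv.x));  // position in the previous frame
    int py = static_cast<int>(std::lround(y - mv.y));
    Color hist   = history.At(px, py);
    Vec2  mvPrev = motionHistory.At(px, py);

    // Reject mismatching history by comparing motion vector magnitudes rather
    // than depth, since the history sample carries exactly the coverage the
    // current frame is missing.
    float magCur  = std::sqrt(mv.x * mv.x + mv.y * mv.y);
    float magPrev = std::sqrt(mvPrev.x * mvPrev.x + mvPrev.y * mvPrev.y);
    float historyWeight = (std::fabs(magCur - magPrev) > 1.0f) ? 0.0f : 0.5f;

    return { cur.r * (1.0f - historyWeight) + hist.r * historyWeight,
             cur.g * (1.0f - historyWeight) + hist.g * historyWeight,
             cur.b * (1.0f - historyWeight) + hist.b * historyWeight };
}
```

When the history is rejected (weight 0) you fall back to the raw, aliased current frame for that pixel, which is exactly why TAA that looks perfect in screenshots can show artifacts or ghosting once things start moving.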
So, in a nutshell: yes, if you switch between different anti-aliasing methods there will be a "huge" difference, simply because they are different ways of calculating how the softening of your edges is handled.
In cel animation, animators can either add motion lines or create an object trail to give the impression of movement. To solve the wagon-wheel effect without changing the sampling rate or wheel speed, animators could add a broken or discolored spoke to force the viewer's visual system to make the correct connections between frames.
As I stated previously, I can zoom in and out on your project and reproduce the blurring of your ship (not the trail behind you). That in itself leads me to believe this is not an issue with your post process while moving. You can also zoom in to a distance, in PIE, where I see no blur.
This is a screenshot of your scene with the post process volume in place, and I see no blur at this distance.
Here is a screenshot of your scene without the post process volume and with motion blur disabled (I turned it off through the viewport). This is also in PIE mode.
The biggest thing I noticed while looking at your mesh is that you only have one LOD, so everything is rendered with that single LOD at any distance you place your camera.
When close up you can use a decent-sized resolution of 512 or so, depending on how much detail and resolution you want. Even if you change the resolution of the lightmap on your mesh with one LOD, that will not affect the resolution of the texture applied. What you can do is set up multiple LOD levels, so that you control the level of detail drawn at each distance you define. At the distance in the picture you linked previously, you could call a texture size of 2048.
At a certain point you will lose some resolution, as the software and distance fields do have limitations.
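As a rough illustration of that idea (this is not Unreal's actual LOD system or API, just a sketch of the concept), here is how a mesh with several LOD levels, each with its own distance threshold and texture budget, would get selected by camera distance. The names, distances, and texture sizes are made up for the example.

```cpp
#include <vector>

// Hypothetical per-LOD description: up to what distance this LOD is used and
// what texture resolution is authored for it.
struct LodLevel {
    float maxDistance;   // use this LOD while the camera is closer than this
    int   textureSize;   // e.g. 512 close up, other sizes authored per LOD
};

struct MeshAsset {
    std::vector<LodLevel> lods;   // ordered from nearest to farthest
};

// Pick the LOD for the current camera distance. With only one entry in `lods`
// (the situation in your project right now), the same LOD and the same texture
// are used at every distance, no matter where you put the camera.
int SelectLod(const MeshAsset& mesh, float cameraDistance) {
    for (int i = 0; i < static_cast<int>(mesh.lods.size()); ++i) {
        if (cameraDistance < mesh.lods[i].maxDistance)
            return i;
    }
    return static_cast<int>(mesh.lods.size()) - 1;  // clamp to the last LOD
}

// Example setup with three LODs and per-LOD texture sizes (illustrative values):
// MeshAsset ship{ { {1000.0f, 512}, {5000.0f, 1024}, {20000.0f, 2048} } };
// int lod = SelectLod(ship, 8000.0f);   // -> LOD 2 at that distance
```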
However, now that you have a little more information on how the different AA methods work, and knowing that this effect is reproducible both in and out of PIE mode without movement, there are some options to look into.
Thank you,