What Types of Anti-Aliasing Does Unreal Engine 4 Support?

I want to offer as many graphics settings as possible in my game, so I was wondering which anti-aliasing methods UE4 supports. I know it supports TXAA by default, but there are dozens of different types of AA. If UE4 does not natively have one type of AA, how do I add it in?

It supports Epic’s own Temporal AA (not TXAA), FXAA, and supersampling by default. You’re probably familiar with the latter two; for Temporal AA, you can read more about it here: https://de45xmedrsdbp.cloudfront.net/Resources/files/TemporalAA_small-59732822.pdf
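You can switch between them in Project Settings (Rendering > Default Settings > Anti-Aliasing Method) or via the r.DefaultFeature.AntiAliasing console variable. A minimal C++ sketch, assuming the usual 0 = off, 1 = FXAA, 2 = Temporal AA mapping (worth double-checking against your engine version):

```cpp
#include "HAL/IConsoleManager.h"

// Switch the default anti-aliasing method at runtime via the console variable.
// Assumed mapping: 0 = off, 1 = FXAA, 2 = Temporal AA. Verify the values for
// your engine version with the in-game console's auto-complete help text.
void SetAntiAliasingMethod(int32 Method)
{
    if (IConsoleVariable* CVar =
        IConsoleManager::Get().FindConsoleVariable(TEXT("r.DefaultFeature.AntiAliasing")))
    {
        CVar->Set(Method);
    }
}
```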

If you want to add a custom solution, you can download the engine source and make changes there, but most conventional anti-aliasing techniques don’t work well with deferred rendering (at least not with the same performance as Temporal AA). It might be difficult to get an existing method like MSAA to work on the full scene without tanking the framerate.

Yeah, MSAA is generally not very good for a deferred rendering solution.

Worth noting that there are several options for the ‘Speed’ of the TAA as well in UE4, though I can’t for the life of me remember where you set them now…
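If it helps, I believe the knobs in question are r.PostProcessAAQuality (0–6, higher is slower but better) and r.TemporalAASamples (the jitter sample count); that’s from memory, so verify the names with the console’s auto-complete. A hedged sketch of setting them from C++:

```cpp
#include "HAL/IConsoleManager.h"

// Tune TAA speed/quality via console variables. The cvar names here are
// recalled from memory; confirm they exist in your engine version.
void TuneTemporalAA(int32 Quality, int32 SampleCount)
{
    IConsoleManager& ConsoleMgr = IConsoleManager::Get();
    if (IConsoleVariable* QualityVar =
        ConsoleMgr.FindConsoleVariable(TEXT("r.PostProcessAAQuality")))
    {
        QualityVar->Set(Quality);      // e.g. 3 for medium, 6 for maximum
    }
    if (IConsoleVariable* SamplesVar =
        ConsoleMgr.FindConsoleVariable(TEXT("r.TemporalAASamples")))
    {
        SamplesVar->Set(SampleCount);  // e.g. 8 (the default); fewer = faster
    }
}
```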

Are there any viable alternatives to Temporal AA, either for higher-end machines that can handle something heavier, or for weaker machines that can’t handle it at all (disregarding FXAA, which is only an approximation)?

Could you explain why? I’m using UE4 for cinematics, and don’t really care about performance, only the final quality. I’m currently rendering at huge resolutions and downsampling, which seems like a fairly daft solution to me! I’m interested in extending the code, but not sure of the best approach to take… things like hair are a real problem at the moment.

Temporal AA just doesn’t work for that use case anyway; it causes jitter in things like the depth buffer.

Using MSAA and downsampling is definitely the best way to go if you’re rendering out to video.

I’d stick with that; Ambershee is right about the issues with TAA. For video rendering, do what you’re doing now.
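One thing that may make the workflow less painful: you can let the engine do the supersampling and downsampling for you via r.ScreenPercentage (or grab frames with the HighResShot console command). A minimal sketch, assuming r.ScreenPercentage behaves as a simple percentage in your engine version:

```cpp
#include "HAL/IConsoleManager.h"

// Render the scene at a multiple of the output resolution and let the engine
// downsample. 200 = 2x2 supersampling (4x the pixels), so expect roughly 4x
// the rendering cost per doubling.
void EnableSupersampling(float Percent)
{
    if (IConsoleVariable* CVar =
        IConsoleManager::Get().FindConsoleVariable(TEXT("r.ScreenPercentage")))
    {
        CVar->Set(Percent);  // e.g. 200.0f renders at twice the width/height
    }
}
```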

The reason MSAA doesn’t always work so well in a deferred renderer isn’t so much the technique itself; it’s that the GBuffer a deferred renderer produces is usually SO badly aliased that it’s beyond the help of multisampling unless you use a high multiplier. At that point the quality-versus-performance trade-off makes the deferred solution almost useless.
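To make that concrete, here’s a standalone toy example (plain C++ with invented numbers, not engine code) of the underlying problem: shading is non-linear, so averaging GBuffer samples before lighting gives a different answer than lighting each sample and averaging afterwards, which is what MSAA would need to do to actually smooth the edge:

```cpp
#include <cmath>
#include <cstdio>

// Toy pixel straddling an edge between two surfaces with opposing normals.
struct Vec3 { float x, y, z; };

static float Dot(const Vec3& A, const Vec3& B)
{
    return A.x * B.x + A.y * B.y + A.z * B.z;
}

// Simple Lambertian term: max(N.L, 0).
static float Shade(const Vec3& Normal, const Vec3& LightDir)
{
    return std::fmax(Dot(Normal, LightDir), 0.0f);
}

int main()
{
    const Vec3 LightDir = { 0.0f, 0.0f, 1.0f };
    const Vec3 NormalA  = { 0.0f, 0.0f, 1.0f };   // sample facing the light
    const Vec3 NormalB  = { 0.0f, 0.0f, -1.0f };  // sample facing away

    // What MSAA needs: shade each sample, then average -> smooth edge.
    float PerSample = 0.5f * (Shade(NormalA, LightDir) + Shade(NormalB, LightDir));

    // Resolving the GBuffer first averages the normals to (0, 0, 0),
    // and shading that gives a completely different result.
    Vec3 Resolved = { 0.5f * (NormalA.x + NormalB.x),
                      0.5f * (NormalA.y + NormalB.y),
                      0.5f * (NormalA.z + NormalB.z) };
    float PostResolve = Shade(Resolved, LightDir);

    std::printf("per-sample shading: %.2f, post-resolve shading: %.2f\n",
                PerSample, PostResolve);  // prints 0.50 vs 0.00
    return 0;
}
```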

Also, I can’t remember whether the anti-aliasing happens before or after the GBuffer has been rendered, but there’s a limit to how large you can render the GBuffer (I think the limit in Unreal is 16K). You can render at a higher resolution, but there’s a cap in DX/OpenGL that you’ll hit eventually anyway. The main issue is that each step up costs you roughly 4x the rendering time, so MSAA has an upper limit on what multiplier you can use, which probably still isn’t good enough. At least, that’s how I think it works.
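A quick back-of-the-envelope illustration of that 4x-per-step cost (the ~28 bytes per GBuffer pixel is an assumed round figure for illustration, not UE4’s exact layout):

```cpp
#include <cstdio>

int main()
{
    const long long BytesPerPixel = 28;  // assumed GBuffer footprint per pixel
    long long Width = 1920, Height = 1080;
    for (int Step = 0; Step < 4; ++Step)  // 1080p -> 4K -> 8K -> 16K
    {
        long long Pixels = Width * Height;
        std::printf("%5lldx%-5lld  %10lld pixels  ~%5lld MB of GBuffer\n",
                    Width, Height, Pixels,
                    Pixels * BytesPerPixel / (1024 * 1024));
        Width *= 2;
        Height *= 2;
    }
    return 0;
}
```

Doubling both dimensions quadruples the pixel count, so by 16K (15360x8640) you’re pushing over 130 million pixels per frame; fill rate, bandwidth, and memory all scale with that.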

I think it was Joe Graf who did a presentation on why they settled on TAA and didn’t use MSAA… I’ll try and dig it out.

Thank you!

The more I can learn about this the better. I have been considering Unity and CryEngine as alternatives, mainly because of these anti-aliasing problems, but it’s not clear whether I’d fare any better than with UE4… currently I render at anything from 4K up to nearly 16K in order to produce 1080p output. I’m fairly nervous about what happens when we have to produce 4K final output though, especially if there are hard limits on GBuffer size.

Unity would not be the way to go if AA is important to you.

Are you going to produce 4K for YouTube/online streaming? If so, compression is still going to affect the video far more than “only” downsampling from 8K–16K does. Even 1080p video upscaled to 4K looks better, and perfectly passable, compared to YouTube’s 1080p bitrate/compression. Also, the higher the resolution, the less visible aliasing and artifacts become, so you probably won’t need something like 32K to make it look as clean as 1080p downsampled from 16K.