Virtual texture performance for deferred rendering?

We use Unreal for CG animation work with a lot of UDIM textures, so I'm mainly concerned with deferred rendering. In Unreal, virtual texture streaming is required for UDIM textures to work at all. Is there any disadvantage to using virtual textures for everything, as opposed to using VTs only where needed (for UDIMs) and regular textures otherwise? That is, are virtual textures less performant than regular textures in the context of deferred rendering?

Sampling a virtual texture is more expensive than sampling a regular texture. I've never actually measured the impact of this on a full scene, but you can get statistics on its cost in the editor. In general, I don't think it would be much of an issue, but you do have the option of only using it on large textures (or UDIM'd meshes, in your case).
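
For example, you can toggle the built-in virtual texturing stats with the `stat virtualtexturing` console command, either from the console directly or via editor Python. A minimal sketch, assuming the Python Editor Script Plugin (the `unreal` module) is enabled:

```python
import unreal

# Minimal sketch: toggle the on-screen virtual texturing stats from editor
# Python, so you can compare the scene with and without VT-heavy materials.
world = unreal.EditorLevelLibrary.get_editor_world()
unreal.SystemLibrary.execute_console_command(world, "stat virtualtexturing")
```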

The most significant disadvantage of streaming virtual textures (SVTs) is visible popping as the higher-resolution tiles stream in, since the engine doesn't know which tiles it needs until they've already been required to render a frame. I'm not sure whether this behavior is different for the Sequencer/Movie Render Queue (MRQ); it may be able to do some warm-up for linear content so that this isn't a problem.

Yes, in the MRQ Anti-aliasing settings there's an option to render warm-up frames. Also, in the MRQ Game Overrides settings there's an option to set Texture Streaming to "Fully Load Used Textures".
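
If you drive renders through editor Python, both settings can also be set on a queued job. A rough sketch, assuming a job is already in the queue; the property names are taken from the MoviePipeline scripting API and may differ between engine versions:

```python
import unreal

# Rough sketch: enable warm-up frames and "Fully Load Used Textures" on the
# first job in the Movie Render Queue. Property names are assumptions based
# on the MoviePipeline Python API and may vary by engine version.
subsystem = unreal.get_editor_subsystem(unreal.MoviePipelineQueueSubsystem)
job = subsystem.get_queue().get_jobs()[0]   # assumes a job is already queued
config = job.get_configuration()

# Anti-aliasing settings: render (and discard) warm-up frames before output.
aa = config.find_or_add_setting_by_class(unreal.MoviePipelineAntiAliasingSetting)
aa.render_warm_up_count = 32
aa.render_warm_up_frames = True

# Game Overrides: fully load used textures instead of streaming them in.
overrides = config.find_or_add_setting_by_class(unreal.MoviePipelineGameOverrideSetting)
overrides.texture_streaming = unreal.MoviePipelineTextureStreamingMethod.FULLY_LOAD
```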
