Virtual textures - why not always use them?

Can anyone explain the disadvantages of virtual textures and why you wouldn’t always use them?
I have a high-poly interior environment for a desktop application using just four 8K texture atlases.
Should Virtual Texture Streaming be enabled for these textures?
Using 4.27.2
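
For context, here's my understanding of what enabling it would involve (happy to be corrected on the exact names): turn on virtual texture support under Project Settings > Rendering, which I believe just writes the following to DefaultEngine.ini and needs an editor restart in 4.27:

```ini
[/Script/Engine.RendererSettings]
; "Enable virtual texture support" project setting (restart the editor after changing it)
r.VirtualTextures=True
```

Then tick Virtual Texture Streaming on each of the 8K atlases, or right-click them in the Content Browser and use Convert to Virtual Texture, which as far as I know also switches the sampler type in the materials that reference them.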


RVT comes into its own with landscape. With landscape, the engine spends a lot of time deciding which part of which texture to show the player, because in a sense it's blended dynamically.

So it’s much more efficient to bake the whole thing into a big render target. Then, the game is just using memory instead of processor power.
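
Rough numbers to illustrate what I mean (assuming, say, a 6-layer landscape material with a packed base color/normal pair per layer plus two weightmap lookups):

$$6 \times 2 + 2 = 14 \ \text{texture samples per pixel, re-blended every single frame}$$
$$\approx 2 \ \text{samples per pixel with RVT, just reading the pre-blended pages}$$

With RVT the blend only has to be re-run when a page is generated or invalidated, not every frame.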

If you have a texture placed on a mesh, why bother copying that to a render target to display it? You already have the mip system, which works great.

Disagree. There seems to be much more upside than downside to just using virtual textures for everything.

According to the official documentation:

Traditional mip-based texture streaming performs offline analysis of material UV usage and then, at runtime, decides which mip levels of a texture to load based on object visibility and distance. This process can be limiting because the streaming data considered is full mip levels of the texture.

When using high-resolution textures, loading a higher mip level of a texture can potentially have significant performance and memory overhead. Also, the CPU makes mip-based texture streaming decisions using CPU-based object visibility and culling.

Visibility is computed conservatively (meaning the system is more likely than not to load something) to avoid objects popping into view. So, if even a small part of an object is visible, the entire object is considered visible and is loaded, along with any associated textures that may need to stream in.

In contrast, the virtual texturing system only streams in the parts of the textures that UE requires to render what is visible. It does this by splitting all mip levels into tiles of a small, fixed size. The GPU determines which tiles are accessed by the visible pixels on screen and loads the required tiles into a GPU memory cache. No matter how large the texture is, only the visible tiles are considered, because of the fixed tile size. And since the GPU computes tile visibility from the standard depth buffer, SVT requests only happen for parts of a texture that actually affect visible pixels.
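
To put rough numbers on that (assuming BC1 compression at 0.5 bytes per texel and the default 128-texel virtual texture tile size, ignoring tile borders):

$$\text{full 8K mip 0: } 8192 \times 8192 \times 0.5 \ \text{B} = 32 \ \text{MiB}$$
$$\text{one VT tile: } 128 \times 128 \times 0.5 \ \text{B} = 8 \ \text{KiB}$$

So instead of committing a 32 MiB mip to memory because one corner of an atlas is on screen, the SVT path can pull in just the handful of 8 KiB tiles that are actually sampled.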

So from my understanding, with current-gen games that usually use a bunch of Megascans assets with 4K textures, this is a huge win. The downside of virtual textures is the extra shader cost per sample, but imo that overhead is negligible in comparison. Wouldn't you agree?

This is an interesting topic.
Judging from the Streaming Virtual Texture description (not to be confused with Runtime Virtual Textures), it seems like we should always use them for their superior streaming method.

But we would need a real comparison test, e.g. a scene that uses many textures as Virtual Textures and an identical scene but with regular textures. Then compare ms, memory, and so on…
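
If someone does run that test, the standard console stats should cover most of it (assuming I'm remembering the stat group names correctly):

```
stat virtualtexturing    (VT physical pool usage and tile uploads)
stat streaming           (the regular mip streaming pool, for the non-VT scene)
stat unit                (frame / game / draw / GPU times)
memreport -full          (full memory dump to Saved/Profiling/MemReports)
```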

I've found more recent info on the matter, which somewhat confirms our theory here: