Virtual Texture Streaming - Any good resources out there?

Virtual Texture Streaming seems cool, and the docs explain what it is well. I’m just confused about the “when” and “why”, and there isn’t much information out there besides the docs. Is the idea that you can let Nanite batch together different objects that share a single material with large textures? Except when looking at the Valley of the Ancient project, most of the geometry uses unique textures. What is the benefit here if there are different “Nanite instances”? Isn’t streaming mips going to do the same thing?

OK, gonna attempt to answer myself. I was way off… It seems like Virtual Texture Streaming is meant for really large meshes and textures where you only see a small section at a time, or a small area close to the camera, so it can load just that specific section at high res. But doesn’t this require that the UVs be laid out contiguously on the texture sheet? If the UVs are scattered around the sheet, wouldn’t it just have to load all the tiles anyway?
I’m still curious though… could VTS be used as a Nanite optimization to bring down the number of “Nanite primitives”? Say you have 64 unique objects with 1k textures that are used a lot in the world, you batch them into a single 8k material using the correct tiles, and you let VTS sort out which tiles to load into memory. Or is Nanite already doing something similar behind the scenes?
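To make the “load all the tiles anyway” question concrete, here’s a minimal sketch in plain Python (nothing engine-specific; the 128-texel tile size and 4k page table are just assumptions) of how a tiled texture system decides which tiles a visible UV region touches. A compact island maps cleanly onto tiles, while a tiny island that straddles a tile boundary still forces whole tiles into memory:

```python
TILE = 128   # assumed tile size in texels
TEX = 4096   # assumed texture resolution

def tiles_for_uv_rect(u0, v0, u1, v1, tex=TEX, tile=TILE):
    """Inclusive set of (tx, ty) tile coordinates a UV rectangle touches."""
    tx0, ty0 = int(u0 * tex) // tile, int(v0 * tex) // tile
    tx1 = max(int(u1 * tex) - 1, 0) // tile
    ty1 = max(int(v1 * tex) - 1, 0) // tile
    return {(tx, ty) for tx in range(tx0, tx1 + 1)
                     for ty in range(ty0, ty1 + 1)}

# A compact 1024x1024-texel UV island maps exactly onto an 8x8 block of tiles:
compact = tiles_for_uv_rect(0.0, 0.0, 0.25, 0.25)
print(len(compact))  # 64 tiles, no wasted residency

# A tiny 64x64-texel island straddling a tile corner still pulls in four
# whole tiles (4 * 128 * 128 texels resident for only 64 * 64 texels used):
straddling = tiles_for_uv_rect(96/TEX, 96/TEX, 160/TEX, 160/TEX)
print(len(straddling))  # 4 tiles
```

So scattered UVs don’t force the *whole* texture to load, but badly placed small islands do inflate how many tiles end up resident relative to the texels actually sampled.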

Anything ‘virtual’ (virtual textures, RVT, etc.) is ‘chunked’, or tiled, so that the engine tries to work with only what it needs, reading a subset of a texture or the like.

My understanding is that this tech is distinct from Nanite; it only applies to the texturing step. Nanite uses triangle clustering and some kind of pre-computed visibility to help the rendering step cull down to just about what you need to see during the raster/layering process.

The VTs come into play when the engine actually draws something; Nanite helps determine what to draw.

What you are talking about, combining stuff into a single unit, is essentially texture atlasing. This works on its own level to provide benefits to resources and complexity, but doesn’t really do anything in terms of culling or displaying triangles. Yes, I would think that atlasing would still give you a benefit, just not in the part of the engine you mentioned, since atlasing optimizes texture paths, not mesh paths.

Yes, thank you. I was just confused about how VTS was being used in the Valley of the Ancient demo. It seemed like it would have been more efficient to batch things together than to give each instance its own virtual texture. But I guess they just wanted the max 8k per instance. I’m still unclear on how efficiently Nanite renders different material instances (or Nanite draw calls?).
In theory, couldn’t you combine all objects into one VTS material (maybe using the UDIM workflow in the docs), so that Nanite renders a single material instance for everything and VTS decides how to load smaller sections of the giant texture into memory? I’m just brainstorming; I know such a material is impractical. But it would be useful to know how these things work together when making assets.

Sounds about right, flow-wise. Unsure how much an 8k might be worth over a 4k, if anything.

As far as materials and Nanite: the depth render they run lets them shade all pixels of a single material in one pass, with no pixel-to-pixel material switching. You DO still have to render the same number of pixels, but it’s done in a batch now, which is (almost always) more performant. Nifty little side effect of their ‘adjustments’ to the rendering process. Unsure if this property applies to Nanite meshes only or the entire scene.

Yes, so the idea would be to get Nanite to render in as few passes as possible.
4 unique objects with four 4k materials rendered as 4 Nanite material instances could instead be 4 unique objects sharing one 8k virtual-texture material rendered as 1 Nanite material instance (while using the same total texture memory).
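Quick arithmetic to sanity-check the “same total texture memory” claim (raw texel counts, ignoring compression and mips):

```python
# Four 4k textures hold exactly the same number of texels as one 8k texture,
# so batching them into one 8k virtual texture doesn't change the budget.
four_4k = 4 * 4096 * 4096
one_8k = 8192 * 8192
print(four_4k == one_8k)  # True: 67,108,864 texels either way
```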
The UDIM workflow described in the docs makes it seem like you wouldn’t even have to manually merge them into an 8k texture; you just tell the system to treat all the textures as one big UDIM virtual texture. I need to try it out. Seems super cool.
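For reference, the UDIM convention that workflow leans on encodes the tile position in the filename suffix, starting at 1001 with ten tiles per row in U. A quick sketch of decoding a tile number (plain Python, not engine code; the `T_Rock.1012.png` filename is just a made-up example):

```python
def udim_to_uv_tile(udim):
    """Decode a UDIM tile number (e.g. from 'T_Rock.1012.png') into (u, v)."""
    idx = udim - 1001
    return idx % 10, idx // 10

print(udim_to_uv_tile(1001))  # (0, 0): first tile
print(udim_to_uv_tile(1010))  # (9, 0): last tile of the first row
print(udim_to_uv_tile(1012))  # (1, 1): second column, second row
```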
I still need to run some performance tests to see how Nanite scales with many material instances and find out if it’s worth it. Also, if anyone knows the best way to compare the old texture-streaming performance against VTS, that would be helpful.

Nanite doesn’t care about material instancing as much as you’re worrying it does. You also don’t want to use the old streaming system; you’ll never get around the headache of managing streaming compared to just using virtualized texturing.

You also don’t want to use 8k textures. The Coalition (the Gears studio) found they make little difference over 4k for them, and they get reasonably close in with their camera. But the problem virtual texturing doesn’t solve is disk space, which is also the reason you want to use runtime virtual textures instead of baking every instance into something unique (streaming virtual texturing). The premier title for UE5, STALKER 2, is looking to come in around 200 GB after compression, very likely using 4k textures and a lot of virtual texturing. High-res textures can balloon your disk size incredibly quickly, even for smaller projects, so it’s a good idea to think about trading off performance simply for the sake of keeping disk size down.
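The disk-size point is easy to quantify: compressed size scales with texel count, so each doubling of resolution quadruples the footprint. A rough estimate at 1 byte per texel (roughly BC7-class block compression; an assumption, since real sizes vary with format and mip chains):

```python
BYTES_PER_TEXEL = 1  # BC7-class block compression, roughly 8 bits/texel

def mb(res):
    # Base mip only; a full mip chain adds about a third on top.
    return res * res * BYTES_PER_TEXEL / (1024 * 1024)

print(mb(4096))  # 16.0 MB per 4k texture
print(mb(8192))  # 64.0 MB: one resolution step = 4x the disk space
# A hundred materials with 3 maps each at 8k is ~19 GB before anything else:
print(100 * 3 * mb(8192) / 1024)  # 18.75 GB
```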

Agreed. I’m finding 4k works just here and there, but multiple 2k textures (usually just 2) can do much better when mixed well. Even if you go for 3, it’s still “25% cheaper” in the larger (well, smaller) sense.
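The “25% cheaper” figure checks out on texel counts alone:

```python
one_4k = 4096 * 4096
three_2k = 3 * 2048 * 2048
print(three_2k / one_4k)  # 0.75: three 2k maps carry 75% of a 4k's texels
```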


Guys I am new to UE5.

I am using Megascans objects of the highest quality with Nanite. I believe these are using 8k textures.

I am already getting texture pool warnings. I have set the pool to 6 GB of VRAM for now, but how do I reduce the texture sizes to 4k for the objects in my scene?

edit: Thanks, I got it: LOD Bias on the texture.
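For anyone landing here later: LOD Bias drops the top mips, so a bias of 1 means the streamer never loads above half the source resolution. The per-mip halving shown here is standard mip-chain behavior, not a UE-specific API (plain Python sketch):

```python
def effective_resolution(base_res, lod_bias):
    """Top mip resolution left available after dropping `lod_bias` mips."""
    return max(base_res >> lod_bias, 1)

print(effective_resolution(8192, 1))  # 4096: one bias step turns 8k into 4k
print(effective_resolution(8192, 2))  # 2048
```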