Virtual Texturing Feedback

Hey… currently it is impossible to even import a 16k x 8k texture; it hard-crashes with a failure to allocate memory. The amount of memory it tries to allocate is huge (in the GBs), so there seems to be another issue beyond just the in-memory size of the texture. This is from the 4.23 branch on GitHub. Generally, though, marking a texture as “Stream Virtual Texture” works.

Tommy.

EDIT: UDIM-based import works, both as a single initial file import and as part of a model import… Cool!!!

What determines which VT asset a given texture is stored in? This is important when breaking up assets for streaming.

I might be wrong, but I think the textures are streamed from their individual assets into an in-memory VT as needed, not stored in the VT assets. The whole point of VT is to avoid having everything loaded at once: textures are broken into individually streamable blocks, so you can have only a portion loaded, or different parts loaded at different mip levels.

What would the terrain workflow with runtime textures be? I don’t remember seeing anything that would allow texturing a large landscape for which the resolution of a single runtime texture is insufficient.

This is basically how it works. It’s all stored… I don’t know how it’s done in UE4, but other than hopefully having good compression, the storage details aren’t that relevant to most concerns.

Since there seems to be confusion as to how VT works and what it does, here’s the best tl;dr I can do: all textures are broken down into little blocks, and all non-translucent objects then reference these blocks. The VT system figures out which blocks are needed to texture the current onscreen content and loads them from disk into RAM. Only the onscreen texture blocks that are currently needed are ever loaded, so regardless of what you have onscreen, textures only ever take up a fixed, small amount of RAM.

Now there’s a bunch of details glossed over, and a bunch of stuff I don’t know about how UE4 does it. Does UE4 support translucent VT? How are these texture blocks compressed? You can keep extremely low-res mips of textures in memory at all times, at the cost of more RAM, to avoid missing-texture errors if streaming doesn’t load a block in time; is this what UE4 does? You might also want that for, say, ray-tracing purposes, because that needs offscreen textures. Etc. But the basic overview is above, and the upshot is: streaming is just handled for you. Unlimited streaming of unlimited texture variety, regardless of asset type, limited only by final game package size. Unlimited texture detail is also handled, limited by bandwidth bottlenecks (say, HDD-to-RAM speed) and, again, final package size.

Imagine your landscape uses a single texture of gargantuan dimensions where each texel is unique, but instead of storing the monster on disk, it’s dynamically generated on demand by running your blending material and caching the results into the virtual texture blocks around the camera position. There are special material nodes to support that workflow too, including baking decals into the VT (and yes, this means the fabled spline decals become a reality).

No, I am not asking how procedural virtual texturing works, but how Procedural Runtime Textures (the name for PVT in Unreal) are meant to be used to texture a landscape that needs a texture larger than the ~500k x ~500k texel maximum size of a PRT.

I assume these would be authored by your landscape creation tool. I’ve wondered the same thing since I first read about megatextures in Rage, and later with that Granite demo with the glider.

That’s exactly the point: there is a “500k x 500k” texture, but it’s *virtual*, so it doesn’t really exist. The parts close to the camera are procedurally generated by blending the tiling landscape layers as needed. Geometry further away gets a lower-res mip generated, and thus requires less memory. As the camera moves closer, the higher mips are generated. This all lives in a fixed memory pool.

Of course, you could also pre-generate a gigantic texture as UDIM files, Rage-style, but on modern hardware it’s usually a better trade-off to generate it dynamically by blending layers and decals asynchronously instead of constantly streaming from disk. In Rage, the virtual texture streaming tech took every ounce of power in the PS3 and X360, so they had no choice but to prerender everything at the texel level (the lighting is baked directly into the textures, which is why there are almost no specular reflections in Rage). Doom and Wolfenstein don’t do that anymore. Far Cry 4 and 5 also use runtime-generated VT on their terrains, with great results.

I don’t think you’ve got the point of the question. It is about the specific implementation and toolset in UE4, rather than how procedural virtual texturing works in general or how it is done elsewhere.

Simply put, you are given:

A WorldComposition world consisting of 16x16 landscape tiles of 505x505 vertices at 1 vertex per meter.
A Runtime Virtual Texture actor, which defines the bounds and coordinates of what gets rendered into the runtime virtual texture.
A Runtime Virtual Texture asset.
A landscape material with two networks: one that renders into a specific runtime virtual texture, and another that samples from that same runtime virtual texture.

Do tell me how to combine these into a properly textured landscape.

In case the pattern I am hinting at is still unclear: if you place a runtime virtual texture actor with the maximum possible virtual texture size so that it covers all 16x16 tiles of your world composition, you get a stunning ~60 texels per meter of resolution.
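To make that arithmetic explicit, here’s the back-of-the-envelope calculation, assuming the ~500k-texel maximum RVT dimension mentioned earlier (the exact limit is my reading of the post above, not a documented figure):

```cpp
// Rough numbers for the setup described above: a WorldComposition map
// of 16x16 landscape tiles, each roughly 505 m across at ~1 vertex per
// meter, covered by a single runtime virtual texture whose maximum
// dimension is assumed to be ~500,000 texels.
constexpr int TilesPerSide   = 16;
constexpr int MetersPerTile  = 505;
constexpr int WorldMeters    = TilesPerSide * MetersPerTile;  // 8080 m
constexpr int MaxRVTTexels   = 500000;  // assumed max RVT dimension
constexpr int TexelsPerMeter = MaxRVTTexels / WorldMeters;    // ~61
```

So a single maximum-size RVT stretched over the whole world yields only about 60 texels per meter, which is the complaint being made.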

Is there nothing under the hood that would automatically create materials and runtime virtual texture bounds on a per-component basis for landscapes?

So how does the VT Volume Actor work? Does it change all the materials within its bounds to use a runtime virtual texture instead?

Also, will support for 16k+ texture import be included in the release version of 4.23?

Nope. The closest you can get right now is to duplicate your landscape material once per landscape and use one runtime virtual texture per landscape. Which is… pretty unmaintainable.

> Hey… currently it is impossible to even import a 16k x 8k texture, it will hard crash with a failure to allocate memory
I’ve been able to import textures of that size, but it does indeed require a huge amount of memory. In the future we hope to improve texture import code to better handle large images.

> Does UE4 support translucent VT?
Yeah, VTs should work on translucent objects. Translucent objects write to the VT feedback buffer stochastically (similar to how opaque objects handle materials with multiple VT samples).
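Here’s a small sketch of what I take “stochastic feedback” to mean; the function names and frame-loop framing are my own illustration, not UE4 code. When a pixel has several VT samples (or translucent layers), writing feedback for one randomly chosen sample per frame still covers every sample within a handful of frames, at a fraction of the bandwidth of writing them all:

```cpp
#include <cstdint>
#include <random>
#include <vector>

// Hypothetical sketch of stochastic VT feedback: per frame, each pixel
// reports only one of its VT samples, chosen at random, so that over
// several frames all samples get feedback without writing all of them
// every frame.
int ChooseFeedbackSample(int sampleCount, std::mt19937& rng) {
    std::uniform_int_distribution<int> dist(0, sampleCount - 1);
    return dist(rng);
}

// Tally which samples received feedback over a number of frames.
std::vector<int> FeedbackCoverage(int sampleCount, int frames, uint32_t seed) {
    std::mt19937 rng(seed);
    std::vector<int> hits(sampleCount, 0);
    for (int f = 0; f < frames; ++f)
        ++hits[ChooseFeedbackSample(sampleCount, rng)];
    return hits;
}
```

Over a few dozen frames every sample is hit, so the streaming system still learns about every tile it needs; the only cost is a short delay before a rarely chosen sample’s tiles are requested.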

> How are these texture blocks compressed?
VTs are split into blocks of 128x128 by default (136x136 after the border is added). These blocks are compressed with DXT as usual. Each block can then be optionally compressed with zlib or Crunch (Crunch is currently enabled by default, controlled by cvars). Other types of compression may be added in the future.
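For a rough sense of the per-tile footprint, here’s the arithmetic implied by those numbers. The 4-texel border width is an assumption inferred from 136 = 128 + 2 × 4, and the byte counts are just standard DXT block sizes applied to the tile:

```cpp
// Back-of-the-envelope tile sizes for the scheme described above.
// DXT1/BC1 stores each 4x4 texel block in 8 bytes; DXT5/BC3 in 16.
constexpr int PayloadDim   = 128;
constexpr int BorderTexels = 4;                          // assumed border
constexpr int TileDim      = PayloadDim + 2 * BorderTexels;  // 136
constexpr int Blocks4x4    = (TileDim / 4) * (TileDim / 4);  // 1156
constexpr int DXT1TileBytes = Blocks4x4 * 8;   // ~9 KB per tile
constexpr int DXT5TileBytes = Blocks4x4 * 16;  // ~18 KB per tile
```

The optional zlib/Crunch pass then shrinks these further on disk; the DXT-sized tile is what lands in GPU memory.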

> You can keep extremely low res mips of textures in memory at all times to avoid texture missing errors
Yes, this is the approach UE4 is currently taking. All VTs keep a single tile resident at all times, so you shouldn’t see any missing textures (just potentially very blurry textures).

> you might also want the former for say, raytracing purposes because that needs offscreen textures
VT doesn’t work with ray-tracing for this release. We may add this later, but it will likely require a different feedback mechanism.

Thanks for the information!

Do you also have some info on VT for landscapes with open worlds?
I’ve created a thread here if you’d like to check it: Landscape Virtual Texturing (Since UE 4.23) - Rendering - Unreal Engine Forums

> Do you also have some info on VT for landscapes with open worlds?
Sure. In general switching textures to VT will save memory, but increase the cost of materials. However since VT enables much higher texture resolutions to be used, materials can often be simplified by flattening things that used to be multiple texture layers down into a single VT (in the future we may add tools to automate this in some way). So this can end up as a net performance win as well, although this will be case by case.

In addition to streaming VT data from disk, there’s also a system in place to generate VT data on the fly (this is the runtime virtual texture stuff). I didn’t work on this side of things, so I don’t have as much information here, but we should have documentation ready for the 4.23 release. I believe this system is designed with terrains in mind (although it’s not limited to that), automatically compositing multiple decal layers into a single runtime-generated virtual texture.

I’ve tested this, and UDIMs work great for small megatextures (i.e. 32k x 32k). Beyond that size, however, you run into issues in Texture.cpp, where int32s are used for memory offsets/sizes for the texture and mipmaps instead of int64s, causing overflow. UDIM is also a flawed format in that you can only have a horizontal width of 10 textures/tiles, so future support for a “whatever_X021_Y001.tga” syntax would be ideal. The current 10-tile horizontal limit, coupled with the 8k image import limit (again due to int32 code, this time in UE’s array class combined with how many of the built-in image importers allocate their arrays, i.e. using 4 indexes per RGBA pixel instead of 1 vector-class index), means the max VT texture width is 81920 pixels :frowning: Which is rough when you’re trying to import satellite image data of New Zealand that’s 120000x160000 pixels.
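The numbers behind those limits are easy to verify. The sizes come from the post above; the 4 bytes/pixel figure assumes uncompressed RGBA8 source data:

```cpp
#include <cstdint>
#include <limits>

// Byte offsets into a large RGBA8 image overflow int32 well before
// satellite-imagery scale:
constexpr int64_t SatWidth  = 120000;
constexpr int64_t SatHeight = 160000;
constexpr int64_t SatBytes  = SatWidth * SatHeight * 4;  // 76.8 GB
static_assert(SatBytes > std::numeric_limits<int32_t>::max(),
              "offsets into this image overflow int32");

// Even a single 32k x 32k RGBA8 image lands just past the boundary:
constexpr int64_t Bytes32k = 32768LL * 32768 * 4;  // 4,294,967,296
static_assert(Bytes32k > std::numeric_limits<int32_t>::max(),
              "32k x 32k RGBA8 also overflows int32");

// The practical width cap: 10 UDIM columns (the U digit spans
// 1001..1010) times the 8k per-file import limit described above.
constexpr int64_t MaxVTWidth = 10 * 8192;  // 81920 pixels
```

So with int32 offsets, anything at or past 32k x 32k uncompressed is already out of range, and the UDIM column limit independently caps the assembled width at 81920.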

Edit 1: After scaling down to 81920 width and modifying UE4’s texture class to use int64 instead, it crashes when trying to compress with Crunch in “CrunchCompression.dll!crnlib::crn_comp::quantize_images”, which, based on the GitHub repo, also uses a lot of int32 logic… I’m going to attempt to run without Crunch rather than dig into that rabbit hole.

There goes my dream of using VT for massive planetary textures…

You can still remap the UVs or split across multiple VTs, but the bigger problem I’m running into is how the textures are managed in RAM while being imported. I’m importing an 81920x122880 megatexture right now; it has taken ~24 hours to import 36 of 150 8192x8192 tiles, with peak UE4 RAM usage of 200GB. I’m having to use an NVMe drive for RAM overflow/pagefiling. Only 76 hours to go! No crashes so far after the conversion to int64, but I’ll have to re-attach the debugger when it’s almost done, as I wouldn’t doubt there’s some other 32-bit int issue I haven’t caught yet.

While not directly comparable, I imported/rescaled/exported this megatexture in ~1.5 hours in GIMP (peak RAM usage was, I believe, ~120GB), whereas UE4 will take about 100 hours to do the extra VT processing/conversion.

Edit 1:
:eek: Just a cozy 213GB of allocated memory; tile 62 out of 150

Edit 2:
I hate Microsoft so much. On tile 140 out of 150, roughly 90hrs in, Microsoft decided to reboot my computer via “UpdateOrchestrator”. No words can describe how much I hate Microsoft and their garbage updates. Time to use the sledgehammer (Download Sledgehammer - MajorGeeks) on Windows Update

Welp, that is really not an option for me :frowning: I guess I’ll have to manually cut the texture into countless small parts and somehow project them correctly onto a sphere.

Hmm, what is even the point of having a VT feature if we can’t import textures over 8k without the engine causing problems?