Hello, I’m having issues using Virtual Textures. In fact, I’m having so many that I’m starting to think I must be doing something wrong.
I was able to convert a few existing textures to VT using the “Convert to Virtual Texture” button, but whenever I try converting more than a couple hundred at a time, I get a crash in `FVTUploadTileAllocator`. Digging into it, I found the cause is an index overflow: `FVTUploadTileAllocator::FHandle::StagingBufferIndex` goes over 255. There is a throttling mechanism for uploads, but it’s explicitly bypassed for uploads marked as “High” priority (which these are, for some reason). Is that intentional? If so, why?
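To make the failure mode concrete, here’s a minimal standalone sketch (my own reconstruction, not the engine source; I’m only assuming the index is stored in an 8-bit field, based on the 255 limit):

```cpp
#include <cstdint>
#include <cstdio>

// Illustrative only: if StagingBufferIndex lives in an 8-bit bitfield, the
// 256th staging buffer wraps back to index 0, and the handle silently refers
// to the wrong buffer. Both field names here are placeholders.
struct FHandleSketch
{
	uint32_t StagingBufferIndex : 8;   // tops out at 255
	uint32_t TileIndexInBuffer  : 24;
};

int main()
{
	FHandleSketch Handle{};
	uint32_t NumStagingBuffers = 255;
	Handle.StagingBufferIndex = NumStagingBuffers + 1;   // 256 wraps to 0
	printf("%u\n", (unsigned)Handle.StagingBufferIndex); // prints 0
	return 0;
}
```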
I worked around this crash by changing `FVTUploadTileAllocator::FHandle` to `uint64`. However, I then ran into an assert caused by `FVTUploadTileAllocator::NumAllocatedBytes` overflowing (it’s only 32 bits), since the allocator ended up with more than 4 GB in staging buffers alone. This doesn’t just happen during the initial conversion—it also occurs later when meshes are using these textures. That doesn’t seem like the intended behavior, but I can’t figure out what I’m doing wrong.
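That second overflow is easy to demonstrate in isolation (the 64 MB allocation size below is just an example value):

```cpp
#include <cstdint>
#include <cstdio>

// A 32-bit byte counter wraps at 4 GiB: once the staging buffers alone pass
// that mark, the bookkeeping silently goes wrong and the assert fires.
// Widening the counter to 64 bits (the second part of my hotfix) avoids it.
int main()
{
	uint32_t Counter32 = 0;
	uint64_t Counter64 = 0;
	const uint64_t AllocSize = 64ull << 20;  // 64 MB per allocation (example)
	for (int i = 0; i < 65; ++i)             // 65 x 64 MB = 4160 MB > 4 GiB
	{
		Counter32 += (uint32_t)AllocSize;
		Counter64 += AllocSize;
	}
	printf("32-bit: %u MB, 64-bit: %llu MB\n",
	       (unsigned)(Counter32 >> 20),
	       (unsigned long long)(Counter64 >> 20)); // 64 MB vs 4160 MB
	return 0;
}
```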
I’ve tried enabling/disabling `PoolSizeAutogrow` and setting appropriate buffer sizes based on the transient buffer sizes (only two texture formats were involved, by the way). I also tried all the throttling CVars, but as mentioned above, those are ignored because of the “High” priority. See the sketch below for how I read that path.
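This is the throttling behavior paraphrased as a sketch (names and structure are mine, not the actual engine code):

```cpp
#include <cstdint>

// My reconstruction of the throttling decision, based on the behavior I'm
// seeing: the per-frame budget is only consulted for normal-priority uploads,
// so "High"-priority requests keep allocating staging buffers unconditionally,
// regardless of the CVars. That's how the allocator can blow past 255 buffers.
enum class EUploadPrioritySketch { Normal, High };

bool ShouldThrottleUpload(EUploadPrioritySketch Priority,
                          uint64_t PendingUploadBytes,
                          uint64_t MaxUploadBytes) // e.g. r.VT.MaxUploadMemory
{
	if (Priority == EUploadPrioritySketch::High)
	{
		return false; // budget bypassed entirely for High priority
	}
	return PendingUploadBytes >= MaxUploadBytes;
}
```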
My only other thought was that the issue might lie with the source textures. But then a colleague of mine hit the exact same problem when generating HLODs. After generation, the only asset using the VT system was the landscape (this seems to happen automatically during HLOD generation), and he got the same crash. With my hotfix applied, the editor no longer crashed, but the system caused 100 ms+ spikes roughly every second, with the spike size correlating with the pool size. The spikes appear to be caused by the upload memory budget (`r.VT.MaxUploadMemory`) being too low, but I don’t understand why a value larger than 64 MB would be needed.
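In case anyone wants to reproduce our experiments: this is roughly what we tried (the value is an example, not a recommendation; the console equivalent is just `r.VT.MaxUploadMemory 256`):

```cpp
#include "HAL/IConsoleManager.h"

// Experiment sketch: raise the VT upload budget to see whether the spikes
// shrink. 256 is an arbitrary test value; as far as I can tell the default
// is 64 (MB), which is why I'm puzzled that more would ever be needed.
void RaiseVTUploadBudgetForTesting()
{
	if (IConsoleVariable* MaxUploadMemory =
			IConsoleManager::Get().FindConsoleVariable(TEXT("r.VT.MaxUploadMemory")))
	{
		MaxUploadMemory->Set(256); // MB of upload memory before throttling
	}
}
```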
I should mention that we’re testing this on a fairly large 4x4 km map (that uses World Partition) with quite a few meshes. But I would assume this is exactly the kind of use case the system is meant to support.
Could you please help figure out what I’m doing wrong?
Update: We finally managed to fix the crash by increasing `r.VT.UploadMemoryPageSize` by a factor of 16 (from the 4 MB default to 64 MB). Presumably larger pages mean fewer staging buffers for the same upload volume, which keeps `StagingBufferIndex` under 256, so this sidesteps the overflow rather than fixing it. It did not fix the spikes, though. Also, why is the default value so low?
Thanks