Crash when converting/allocating virtual textures

Hello, I’m having issues using Virtual Textures. In fact, I’m having so many that I’m starting to think I must be doing something wrong.

I was able to convert a few existing textures to VT using the “Convert to Virtual Texture” button, but whenever I try converting more than a couple hundred at a time, I always get a crash in `FVTUploadTileAllocator`. Looking into it further, it turns out to be caused by an index overflow—specifically, `FVTUploadTileAllocator::FHandle::StagingBufferIndex` going over 255. There is a throttling mechanism for uploads, but it’s explicitly ignored for uploads marked as “High” priority (which these are for some reason). Is that intentional? If so, why?
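For reference, here is a minimal sketch of the overflow as I understand it (simplified; the real `FVTUploadTileAllocator::FHandle` packs more state than this):

```cpp
#include <cstdint>

// Simplified sketch of the suspected overflow; the actual engine
// handle stores additional fields alongside the staging buffer index.
struct FHandleSketch
{
    uint8_t StagingBufferIndex; // 8 bits: valid range 0..255
};

// Once more than 256 staging buffers exist, the truncating cast wraps
// and the handle silently refers to the wrong buffer.
uint8_t AllocateIndex(uint32_t BufferCount)
{
    return static_cast<uint8_t>(BufferCount);
}
```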

I worked around this crash by changing `FVTUploadTileAllocator::FHandle` to `uint64`. However, I then ran into an assert caused by `FVTUploadTileAllocator::NumAllocatedBytes` overflowing (it’s only 32 bits), since the allocator ended up with more than 4 GB in staging buffers alone. This doesn’t just happen during the initial conversion—it also occurs later when meshes are using these textures. That doesn’t seem like the intended behavior, but I can’t figure out what I’m doing wrong.
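The second overflow boils down to the same pattern (again a simplified sketch, not the engine's actual code): a 32-bit byte counter wraps once total staging memory passes 4 GiB.

```cpp
#include <cstdint>

// Simplified sketch: a 32-bit byte counter (like NumAllocatedBytes)
// wraps at 2^32, which is what trips the allocator's assert.
uint32_t AddAllocatedBytes(uint32_t Counter, uint64_t Bytes)
{
    return static_cast<uint32_t>(Counter + Bytes); // truncates past 4 GiB
}
```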

I’ve tried enabling/disabling `PoolSizeAutogrow` and setting appropriate buffer sizes based on the transient buffer sizes (it was only two texture formats, by the way). I also tried all the throttling CVars, but those were ignored due to the priority (as I mentioned before).

My only other thought was that the issue might be with the source textures. But then a colleague of mine ran into the exact same problem when generating HLODs. After generation, the only asset using the VT system was the landscape (it seems this happens automatically during HLOD generation), and he hit the same crash. When he applied my hotfix, the editor no longer crashed, but the system caused 100ms+ spikes roughly every second, with the spike size correlating to the pool size. These appear to be caused by the upload memory (`r.VT.MaxUploadMemory`) not being high enough, but I also don’t understand why a number larger than 64MB is needed.

[Image Removed]

I should mention that we’re testing this on a fairly large 4x4 km map (that uses World Partition) with quite a few meshes. But I would assume this is exactly the kind of use case the system is meant to support.

Could you please help figure out what I’m doing wrong?

Update: We were finally able to fix the crash by increasing `r.VT.UploadMemoryPageSize` by a factor of 16 (to 64MB). However, that did not fix the spikes. Also, why is the default value so low?

Thanks

Steps to Reproduce
The easiest way for me to reproduce this is to manually convert around 500 textures to VT at once.

Hi,

Thanks for reaching out. The `r.VT.UploadMemoryPageSize` CVar controls the size in MB of a single page of virtual texture upload memory. If you have to increase this value 16-fold to prevent crashes, your tile size may be too large. Could you please share a screenshot of your virtual texture pool settings for fixed and transient pools (in Project Settings under Engine > Virtual Texture Pool)?
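To give a sense of why tile size matters so much here, a rough back-of-the-envelope sketch (the 4-bytes-per-texel format and 4-texel border below are placeholder assumptions; real projects use compressed formats and configurable borders): staging memory per tile grows quadratically with tile size.

```cpp
#include <cstdint>

// Back-of-the-envelope staging cost per VT tile. The bytes-per-texel
// and border values passed in are illustrative assumptions only.
uint64_t TileStagingBytes(uint64_t TileSize, uint64_t BorderSize, uint64_t BytesPerTexel)
{
    const uint64_t Padded = TileSize + 2 * BorderSize; // border on each side
    return Padded * Padded * BytesPerTexel;            // quadratic in tile size
}
// With these assumptions, a 128px tile needs ~72 KB of staging memory,
// while a 1024px tile needs ~4 MB, roughly 57x more.
```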

The 100ms+ spikes may be caused by `r.VT.MaxUploadMemory` not being high enough. A low value can make textures load in more slowly, causing texture pop-in, but keeps the frame rate more stable; a high value makes textures resolve more quickly but risks bigger performance hitches. To get a better idea of which methods are causing the hitch, could you profile with Unreal Insights and share a screenshot of the timeline zoomed in on one of these spikes?

It might also help to take a screenshot with `r.VT.Residency.Show` enabled to diagnose these spikes. You could also try adjusting the following residency settings:

- `r.VT.Residency.UpperBound` - Virtual Texture pool residency above which we increase mip bias (default 0.95)
- `r.VT.Residency.LowerBound` - Virtual Texture pool residency below which we decrease mip bias (default 0.5)
- `r.VT.Residency.AdjustmentRate` - Rate at which we adjust mip bias due to Virtual Texture pool residency (default 0.2)
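If you want to experiment, these can also be pinned at editor startup, for example in `ConsoleVariables.ini` (the values below are just the defaults, listed as a starting point):

```ini
; ConsoleVariables.ini -- values shown are the engine defaults,
; listed only as a starting point for experimentation.
[Startup]
r.VT.Residency.UpperBound=0.95
r.VT.Residency.LowerBound=0.5
r.VT.Residency.AdjustmentRate=0.2
```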

Thanks,

Sam

Thank you for your reply.

You were right—the issue was caused by the tile size being set too large (1024).

We didn’t notice because someone on our team had changed the default tile size CVar before we had validations in place, so we assumed the modified values were the defaults.

I’d like to apologize for wasting your time; this was entirely our fault.

Hi,

No problem at all, I’m glad to hear you found the issue.

I will close this ticket now, but feel free to open new cases in the future.

Best regards,

Sam