Reducing VRAM usage from high-resolution UI thumbnails

Hello,

We have a question regarding UI texture memory usage and LOD handling in Unreal Engine.

In our project, we use building and item thumbnails in the UI that are authored at 1024×1024. This resolution can be useful on 4K displays, but on lower-resolution screens (e.g. Full HD), loading these textures at full resolution consumes a significant amount of VRAM unnecessarily.

Ideally, we would like Unreal Engine to automatically load lower mip levels of these UI textures depending on the actual screen resolution or DPI scaling, so that users on lower-resolution displays do not pay the memory cost of full-size textures.

So far, the only solution we’ve found is to manually adjust r.UITextureLODBias before loading the images. However, this approach feels problematic because:

  • It affects all UI textures globally, not just our thumbnails/icons.
  • It requires manual management rather than being resolution- or DPI-aware.
  • It feels like something that should be handled automatically by the engine.

Our questions are:

  1. Is there a built-in or recommended way to have UI textures automatically select appropriate mip levels based on screen resolution or DPI scaling?
  2. Are there asset-level or Slate/UMG-specific settings that allow more granular control over UI texture LODs?
  3. Is r.UITextureLODBias the intended approach for this use case, or is there a better pattern we should be using?

Any guidance or best practices for managing high-resolution UI thumbnails without excessive VRAM usage would be greatly appreciated.

Best regards,

Matthias

[Attachment Removed]


Hi,

I’ll link the discussion on [Content removed] here as well since they’re related, and I’ll start with your questions:

Is there a built-in or recommended way to have UI textures automatically select appropriate mip levels based on screen resolution or DPI scaling?

We do use the closest mip based on the size of the texture on-screen, but I believe they’ll all be loaded into memory regardless. We don’t typically use mips for UI textures outside of some special cases where the same texture is displayed at different sizes, where it’s used to improve texture quality at the cost of additional memory overhead.

Are there asset-level or Slate/UMG-specific settings that allow more granular control over UI texture LODs?

You can set an LOD bias on individual textures, but the typical use-case for that is when you want to keep some high resolution source assets in your project and not actually ship those higher resolution mips. We *do* make use of the Downscale option on our UI textures to scale down larger textures on lower-end platforms. We also use the Data Validation plugin to flag large UI textures and encourage designers to apply per-platform downscaling on those textures.

Is r.UITextureLODBias the intended approach for this use case, or is there a better pattern we should be using?

It’s an option, though I think it would be best used in cases where you want to strip some of the higher-resolution mips at cook time. If your intention is to support multiple device profiles on PC (i.e. 4K vs. HD), the most efficient strategy is often to just ship two versions of the textures (with the option to include the 4K textures in a separate content pack). There are a few approaches to this: materials with a static switch to choose the appropriate texture, Blueprint logic to choose the appropriate texture in PreConstruct, etc. It can be a bit of a pain to implement that project-wide, but in this case it seems you could implement it specifically for these inventory icons and see quite an improvement without sacrificing image quality on 4K screens.

Best,

Cody

[Attachment Removed]

Hi,

I’d expect the virtual texture approach to save VRAM, as that’s the primary upside of virtual textures in general, though the exact savings will be tough to quantify without doing your own testing (especially since we don’t have a lot of experience on our end using VTs for UI). My expectation would be that it would stream in just the mip you need based on the size the icon is on-screen, so if that’s lower than the full texture size then you should see those savings.

Texture quality will affect VRAM usage as well; this page shares some details on how. I’d also recommend reviewing the compression settings on your textures to ensure there aren’t some savings to be gained there.

Best,

Cody

[Attachment Removed]

We also tried setting MaxTextureLOD or MipBias on textures, but that doesn’t work in a packaged build.

Changing the Device Profile at runtime also came to mind, but we’re not sure whether that’s possible or when that logic would run (it probably requires a restart?).

We really don’t like the prospect of having multiple versions of all textures for different resolutions. Is that really the most common approach?

[Attachment Removed]

Hi,

Multiple versions is a last resort; our approach in Fortnite is one asset at the highest resolution we need (potentially downscaled for some platforms on a per-texture basis), plus mips for some specific textures that appear at different sizes in-game to prevent aliasing when they’re downscaled. Your requirements are a bit different since you have very high-res art for your inventory icons, so that complicates things a bit.

You might be able to get some metrics on memory usage using Stat VirtualTexturing and Stat VirtualTextureMemory; doing an A/B test with and without virtual textures in your UI might be a good starting point to measure the impact.

[Attachment Removed]

Hi Cody,

Thanks a lot for the detailed explanation.

One follow-up question we had after reading your response: would Virtual Textures actually help in this specific UI thumbnail use case? I am aware of the VT ticket, but currently we are unsure if it would even solve our problem. More specifically, would VT allow these 1024×1024 UI textures to only load the texture data actually needed for their on-screen size (e.g. on a 1080p display), thereby reducing VRAM usage automatically? Or would the memory impact effectively be similar, meaning we would still need to author and/or ship lower-resolution versions of the textures for lower-resolution monitors to see meaningful savings?

We’re trying to better understand whether VT is a viable solution here, or if the separate-texture / per-platform-downscale approach is still the recommended path even when VT is available.

Another question that might be related: How does the “Texture Quality” scalability setting come into play here? Does lowering that setting reduce texture resolution and VRAM impact?

Thanks again for the guidance!

Best regards,

Matthias

[Attachment Removed]

Hello, thanks for your reply.

the exact savings will be tough to quantify without doing your own testing

How would you suggest testing/measuring it? I tried RHI.ResourceMemoryDump, but the textures no longer show up at all, since they are virtual.

we don’t have a lot of experience on our end using VTs for UI

How do you handle it, for example in Fortnite? Are there different sizes for icons depending on the screen resolution, or is the highest-resolution icon always loaded no matter the screen resolution?

[Attachment Removed]