Mutable VRAM consumption and Mesh Streaming

I am seeking clarification regarding Mutable VRAM consumption in Unreal Engine, specifically related to Mesh LOD streaming for Customizable Objects using the Mutable framework.

Could you provide an overview of how Mutable affects VRAM usage at runtime, along with any best practices to optimize memory consumption? According to rhi.DumpResourceMemory, it seems that all LODs of all our runtime-generated meshes are constantly in VRAM, which quickly adds up to 800 MB of NPC body meshes alone. I wonder whether that’s normal and whether Mesh Streaming would help with it. “Enable Mesh Streaming” is already turned on on the main Customizable Object, but it seems to have prerequisites such as the “Mesh Streaming” project setting, which I cannot enable (clicking the checkbox does nothing). Would you recommend using Mesh Streaming in general, and if so, could you advise on the necessary steps or conditions required to activate these options? Are there any known restrictions or configurations that might prevent enabling them?

Thank you for your guidance.

Best regards,

Matthias

[Attachment Removed]

Hey there,

Yes, this is normal. Mutable is memory-intensive at runtime. We do recommend using mesh streaming (the sample project is now set up this way), and the project configuration is:

  1. Must be enabled in the engine: r.MeshStreaming 1 (this is the CVar behind the “Mesh Streaming” project setting)
  2. The state must allow it: bDisableMeshStreaming 0
  3. Has to be enabled on the CO root node: “Enable Mesh Streaming” checked
  4. Has to be enabled in Mutable: Mutable.StreamMeshLODsEnabled 1
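For reference, the engine-side CVars above could be consolidated in DefaultEngine.ini roughly like this (a sketch only; the state and CO settings live on the Mutable assets themselves, so only the global CVars belong here):

```ini
; DefaultEngine.ini -- sketch, verify against your project
[SystemSettings]
r.MeshStreaming=1                  ; engine-level mesh LOD streaming
Mutable.StreamMeshLODsEnabled=1    ; Mutable-side streaming of generated mesh LODs
```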

Another way to reduce memory usage is to bake your instances (if you can) and load the ones you need from disk as needed.

Dustin

[Attachment Removed]

I’m curious whether this has anything to do with it: that some choice wasn’t resolved properly based on a gameplay tag during UpdateCOIFromClothing?

[2026.02.12-15.21.55:510][566]LogScript: Warning: Attempted to access index 0 from array CallFunc_BreakGameplayTagContainer_GameplayTags of length 0!
	VillagerFancy_C /Game/Sunshine/Maps/MainWorld.MainWorld:PersistentLevel.VillagerFancy_C_2147410331
	Function /Game/Sunshine/Character/NPC/Villager/VillagerFancy.VillagerFancy_C:UpdateCOIFromClothing:02F4
[2026.02.12-15.21.55:510][566]LogScript: Warning: Script call stack:
	Function /Game/Sunshine/Character/NPC/Villager/VillagerFancy.VillagerFancy_C:COI_Ready2
	Function /Game/Sunshine/Character/NPC/Villager/VillagerFancy.VillagerFancy_C:ExecuteUbergraph_VillagerFancy
	Function /Game/Sunshine/Character/NPC/Villager/VillagerFancy.VillagerFancy_C:UpdateCOIFromClothing

could that be possibly loading a null asset into your graph in some way?

[Attachment Removed]

I managed to get around the crash for now and will maybe come back to it later (it seemed to be related to the head and groom assets, but it doesn’t really matter right now).

The main problem currently is: mesh streaming is enabled in all the places (global CVar, CO state, CO root node, and the Mutable CVar, which appears to be on by default), but I can still see all LODs of all generated body meshes in rhi.DumpResourceMemory, even though all of them were at LOD3 in the distance when running the dump. At least that’s how I interpret the “Owner” column in the attached sheet, which has the LOD numbers at the end.

Does it mean that the meshes are still not actually streaming, or is it intended to be like that?

[Attachment Removed]

This is possible. There are some cases where the engine keeps all LODs in memory instead of only the one being displayed. Are any of the meshes set with NeedsCPUAccess, or is r.FreeSkeletalMeshBuffers=0?
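For what it’s worth, the latter can be set persistently in the config (a sketch; 1 lets the engine free the CPU-side copies of skeletal mesh buffers once uploaded, when no CPU access is required):

```ini
; DefaultEngine.ini [SystemSettings] -- sketch, verify the default in your engine version
r.FreeSkeletalMeshBuffers=1
```

Typing the CVar name alone at the in-game console prints its current value, which is a quick way to check what the build is actually running with.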

Dustin

[Attachment Removed]

Hello [mention removed]​, sorry for bringing this up again, but I have now run into texture streaming problems. Since mesh streaming was enabled, the texture streaming pool, which was previously entirely sufficient, is now constantly overflowing. It seems to be related to mesh streaming. How is that possible? Is mesh streaming sharing the pool with texture streaming?

[Image Removed]

[Attachment Removed]

How is that possible? Is mesh streaming sharing the pool with texture streaming?

They do if r.Streaming.PoolSizeForMeshes is < 0.

TAutoConsoleVariable<int32> CVarStreamingPoolSize(
	TEXT("r.Streaming.PoolSize"),
	-1,
	TEXT("-1: Default texture pool size, otherwise the size in MB"),
	ECVF_Scalability | ECVF_ExcludeFromPreview);
 
static TAutoConsoleVariable<int32> CVarStreamingPoolSizeForMeshes(
	TEXT("r.Streaming.PoolSizeForMeshes"),
	-1,
	TEXT("< 0: Mesh and texture share the same pool, otherwise the size of pool dedicated to meshes."),
	ECVF_Scalability | ECVF_ExcludeFromPreview);
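So one way out, assuming that shared-pool behavior is what you’re hitting, is to give meshes their own budget so they stop competing with textures (illustrative values, tune per project):

```ini
; DefaultEngine.ini [SystemSettings] -- illustrative sizes only
r.Streaming.PoolSize=3000          ; texture pool in MB (-1 = engine default)
r.Streaming.PoolSizeForMeshes=512  ; dedicated mesh pool in MB; < 0 shares the texture pool
```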

[Attachment Removed]

Can you share your latest ini settings and the rhi.DumpResourceMemory and a screenshot with stat Streaming on?

Dustin

[Attachment Removed]

It should be evicting things based on that cap; I wouldn’t call it a soft hint, it is pretty firm. In your screenshot, those meshes are ray tracing geometries, which are part of a separate streaming pool with their own tunables, managed by a dedicated system in RayTracingGeometryManager.cpp that is completely separate from the mesh/texture streaming pool. Here are the relevant CVars:

Pool Size & Residency

CVar: r.RayTracing.ResidentGeometryMemoryPoolSizeInMB

Default: 400

Description: Size of the RT geometry pool in MB. Unreferenced geometries stay resident up to this budget to avoid rebuild costs when re-requested.

────────────────────────────────────────

CVar: r.RayTracing.NumAlwaysResidentLODs

Default: 1

Description: Number of LODs per geometry group to keep resident even when not referenced by the TLAS.

────────────────────────────────────────

CVar: r.RayTracing.UseReferenceBasedResidency

Default: true

Description: Evict/keep geometries based on whether they’re referenced in the TLAS.

────────────────────────────────────────

CVar: r.RayTracing.ApproximateCompactionRatio

Default: 0.5

Description: Ratio used to estimate post-compaction size (temporary — will be replaced by actual tracking).

Streaming & Build Throttling

CVar: r.RayTracing.Streaming.MaxPendingRequests

Default: 128

Description: Max in-flight streaming requests for RT geometry data.

────────────────────────────────────────

CVar: r.RayTracing.OnDemandGeometryBuffersStreaming

Default: true

Description: Stream VB/IB buffers on-demand for dynamic geometry instead of keeping them in memory.

────────────────────────────────────────

CVar: r.RayTracing.Geometry.MaxBuiltPrimitivesPerFrame

Default: -1 (unlimited)

Description: BLAS build budget per frame in triangle count. When positive, builds are spread across frames.

────────────────────────────────────────

CVar: r.RayTracing.Geometry.PendingBuildPriorityBoostPerFrame

Default: 0.001

Description: Priority increment per frame for pending builds that weren’t scheduled.

Debug

CVar: r.RayTracing.Debug.GeometryMemoryPool.AlwaysResidentWarningPercentage

Default: 20

Description: Warns when always-resident geometry exceeds this percentage of the pool.

────────────────────────────────────────

CVar: r.RayTracing.DumpUnreferencedAlwaysResidentGeometries

Default: (command)

Description: Dumps unreferenced always-resident geometries to CSV for analysis.

The primary mechanism you want to look at is: r.RayTracing.ResidentGeometryMemoryPoolSizeInMB.
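As a starting point, a tightened configuration of that pool might look like this (illustrative numbers only, not a recommendation; measure before and after with rhi.DumpResourceMemory):

```ini
; DefaultEngine.ini [SystemSettings] -- illustrative values
r.RayTracing.ResidentGeometryMemoryPoolSizeInMB=200  ; down from the 400 MB default
r.RayTracing.NumAlwaysResidentLODs=1                 ; keep only one LOD per group always resident
r.RayTracing.UseReferenceBasedResidency=1            ; evict geometry not referenced by the TLAS
```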

That said, I did see roughly 200 MB of skin cache data, which covers all of the meshes in the scene; many of those are Mutable-generated meshes simply existing and being posed on the GPU. I didn’t see any LOD duplication in your data there. That system seems to be working correctly, but you will want to double-check that everything you expect to see there is there.

Based on the capture you did, it looks like you’re at roughly 12 GB of VRAM usage, 6 GB resident and 6 GB non-resident. Is the primary concern that you’re hitting that cap?

Dustin

[Attachment Removed]

There are two options here:

  • Limit your raytrace culling distance with r.RayTracing.Culling.Radius
  • Mark objects to not be raytraced.
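For example (illustrative value; the culling radius is in Unreal units, i.e. centimeters, so 10000 is 100 m):

```ini
; DefaultEngine.ini [SystemSettings] -- illustrative radius
r.RayTracing.Culling.Radius=10000
```

For the second option, the usual mechanism is the per-component “Visible in Ray Tracing” checkbox (bVisibleInRayTracing on the primitive component); treat that exact property name as something to verify in your engine version.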

In the dump you did, there were a ton of meshes that were resident in the top-level acceleration structure (the RTAS entries in the dump), including the new hair objects. As an example:

SM_FluxPlane512x512Buffer | 33.4777832 | Resident | RTAS/Game/FluidFlux/Surface/Meshes/SM_FluxPlane512x512.SM_FluxPlane512x512 [LOD0]
 
SM_FluxPlane512x512Buffer | 33.4777832 | Resident | RTAS/Game/FluidFlux/Surface/Meshes/SM_FluxPlane512x512.SM_FluxPlane512x512 [LOD0]
 
SM_FluxPlane512x512Buffer | 33.4777832 | Resident | RTAS/Game/FluidFlux/Surface/Meshes/SM_FluxPlane512x512.SM_FluxPlane512x512 [LOD0]

There are 3 entries for this flux plane, totaling almost 100 MB by themselves.

If our mesh streaming pool is set to 80 MB, why would the skin cache be that large?

The issue here is that these are simply animated objects in the skin cache, and philosophically, you still want them rendered, so they need to be in there even at the lowest LOD.

Based on the dump you sent last time, I would look to areas other than characters for reducing your VRAM usage.

Dustin

[Attachment Removed]

Hello Dustin, thank you for explaining!

I enabled it in all these places, but now I get a crash when approaching one of the characters in a packaged build (Development). Do you maybe know what it could be about?

[Attachment Removed]

Since mips are mentioned, could it be related to us using virtual textures on the MetaHumans?

[Image Removed]

[Attachment Removed]

[mention removed]​, if you could just let me know whether I’m on the wrong track here, it would be appreciated. Thanks!

[Attachment Removed]

Hello Dustin,

thank you, setting r.Streaming.PoolSizeForMeshes to a value > 0 (e.g. 512) did resolve the texture streaming pool overflow — the mesh streaming budget is now separated from the texture pool and textures are no longer under pressure.

However, the pool size for meshes itself doesn’t seem to be enforced. Whether I set r.Streaming.PoolSizeForMeshes to 48 or 512 (via -dpcvars, DefaultScalability.ini, or runtime console command), the total VRAM for NPC body meshes reported by rhi.DumpResourceMemory stays at roughly 200MB. I would expect a 48MB pool to force aggressive LOD eviction, but it doesn’t appear to constrain anything.

Is PoolSizeForMeshes effectively a soft hint rather than a hard constraint? If so, is there a recommended way to actually cap mesh streaming VRAM for Mutable-generated skeletal meshes?

Best regards,

Matthias

[Attachment Removed]

Here is the DefaultScalability.ini, the dump, and a screenshot. View distance quality was set to Medium, so it should be 80 MB.

When filtering for SK_CO_BodyVillagerMeadow, the sum of all sizes comes to 288.64 MB, but I’m not sure if I’m reading the dump correctly.
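For transparency, this is roughly how I summed the sizes: a quick script over the exported CSV. The column names "Size(MB)" and "Owner" are assumptions; adjust them to whatever the actual dump header says.

```python
import csv
import io

def sum_mesh_mb(csv_text, owner_substring, size_col="Size(MB)", owner_col="Owner"):
    """Sum the size column for rows whose owner contains the given substring.

    Column names are assumptions; match them to the real
    rhi.DumpResourceMemory CSV header before trusting the totals.
    """
    total = 0.0
    for row in csv.DictReader(io.StringIO(csv_text)):
        if owner_substring in row.get(owner_col, ""):
            total += float(row[size_col])
    return total

# Tiny synthetic example in the assumed format:
sample = """Name,Size(MB),Owner
BodyBuffer,100.5,SK_CO_BodyVillagerMeadow [LOD0]
BodyBuffer,50.25,SK_CO_BodyVillagerMeadow [LOD1]
OtherBuffer,10.0,SM_Rock [LOD0]
"""
print(sum_mesh_mb(sample, "SK_CO_BodyVillagerMeadow"))  # 150.75
```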

[Image Removed]

[Attachment Removed]

[Attachment Removed]

Hi Dustin,

Thanks for the detailed breakdown — the distinction between the RT geometry pool and the regular mesh/texture streaming pool is really helpful. We weren’t aware those were managed separately.

To answer your question: yes, we are hitting actual VRAM pressure. Our target spec includes RTX 3070 (8 GB), and we’re running out of video memory on those cards. We’ve already taken a number of steps to reduce our footprint:

- Enabled Nanite for all static meshes

- Converted all textures to VT where possible

- Reduced the texture streaming pool size

- Reduced the Virtual Texture pool sizes

- Reduced the mesh streaming pool (set to 80 MB via view distance quality on medium)

- Removed all hair strand geometry (cards only)

The RT geometry pool is still at the default 400 MB. I did try reducing it, but I’m not sure if the change is detectable in a new dump, or how to filter the resulting CSV to isolate the RT geometry pool entries specifically. Could you point me to which resource flags or types in the dump correspond to that pool? I tried filtering for RTAS, but the resulting size stayed around 400MB even with a pool size of 16 (despite the warning about exceeding the RT pool in game)

Also, one thing that confused me: you mentioned roughly 200 MB of skin cache data for the posed meshes on the GPU. If our mesh streaming pool is set to 80 MB, why would the skin cache be that large?

Given our 8 GB constraint, if you have any suggestions for where we might be able to reclaim large chunks of VRAM beyond what we’ve already done, or noticed anything peculiar in the dump, we’d really appreciate the guidance.

Thanks,

Matthias

[Attachment Removed]