DX12 VRAM Budget

I was wondering if you could help shed some light on a line of code we’ve come across while doing a deep dive into memory usage on PC. It appears that DX12 intentionally targets only 90% of the VRAM budget returned by the DXGI_QUERY_VIDEO_MEMORY_INFO query…

\Engine\Source\Runtime\D3D12RHI\Private\Windows\WindowsD3D12Device.cpp, L1648 in 5.6

    const int64 TargetBudget = LocalVideoMemoryInfo.Budget * 0.90f; // Target using 90% of our budget to account for some fragmentation.
    FD3D12GlobalStats::GTotalGraphicsMemory = TargetBudget;

…compared with the DX11 implementation, which utilises 100% of the budget…

\Engine\Source\Runtime\Windows\D3D11RHI\Private\Windows\WindowsD3D11Device.cpp, L1249 in 5.6

    // use the entire budget for D3D11, in keeping with setting GTotalGraphicsMemory to all of AdapterDesc.DedicatedVideoMemory
    // in the other method directly below
    FD3D11GlobalStats::GTotalGraphicsMemory = LocalVideoMemoryInfo.Budget;

We’ve traced this change back as far as it’s possible to go in Epic’s Perforce, and it harkens back to the Gears of War 4 days circa 2016, so it affects most UE4/UE5 versions.

Back then, GoW4’s required card for DX12 was a GTX 970 with 4GB of VRAM, so the engine was leaving some 400MB on the table for ‘fragmentation’.

Compare that with modern (at the time of this post) GPUs like the RTX 5090, which can have as much as 32GB: there it’s leaving 3.2GB behind.

We’re wondering what exactly the ‘fragmentation’ here refers to?

…and if it’s still an issue with today’s GPUs?

…and if it is, should it be a fixed amount rather than a % of the budget?

…and why doesn’t DX11 have to concern itself with ‘fragmentation’?

What are the consequences of using 100% of the budget the DXGI query recommends targeting? AFAIK the GPU can always end up exceeding the budget anyway, because the PC is multitasking other processes that also require VRAM; if that happens the OS has to ‘shuffle some things around’ in memory, which can result in hitches, but that’s the case whether the target is set at 90% or 100%.
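For reference, here is a minimal standalone sketch of the underlying query outside the engine, using plain DXGI (the error handling and adapter choice are simplified assumptions of mine): the Budget field is the OS’s advisory target for this process, and CurrentUsage creeping past it is what triggers the residency shuffling described above.

    #include <dxgi1_4.h>
    #include <wrl/client.h>
    #include <cstdio>

    using Microsoft::WRL::ComPtr;

    int main()
    {
        ComPtr<IDXGIFactory4> Factory;
        if (FAILED(CreateDXGIFactory1(IID_PPV_ARGS(&Factory))))
            return 1;

        ComPtr<IDXGIAdapter1> Adapter;
        if (FAILED(Factory->EnumAdapters1(0, &Adapter)))   // first adapter only, for brevity
            return 1;

        ComPtr<IDXGIAdapter3> Adapter3;
        if (FAILED(Adapter.As(&Adapter3)))                  // IDXGIAdapter3 exposes the budget query
            return 1;

        DXGI_QUERY_VIDEO_MEMORY_INFO Info = {};
        Adapter3->QueryVideoMemoryInfo(0, DXGI_MEMORY_SEGMENT_GROUP_LOCAL, &Info);

        // Budget:       how much the OS currently suggests this process should use (it can change at runtime).
        // CurrentUsage: how much the process actually uses; exceeding Budget invites OS paging/demotion.
        std::printf("Budget: %llu MB, CurrentUsage: %llu MB\n",
                    Info.Budget >> 20, Info.CurrentUsage >> 20);
        return 0;
    }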

…and if the engine does need to leave some headroom here, can this be converted to a CVar so developers can adjust it more easily as they see fit?
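To make the ask concrete, here is a rough sketch of the kind of change we have in mind, written against the WindowsD3D12Device.cpp snippet above; the CVar name r.D3D12.VRAMBudgetScale is hypothetical, invented for illustration, not an existing engine variable.

    #include "HAL/IConsoleManager.h" // for TAutoConsoleVariable (normally already available in the RHI module)

    // Hypothetical CVar replacing the hardcoded 0.90f.
    static TAutoConsoleVariable<float> CVarD3D12VRAMBudgetScale(
        TEXT("r.D3D12.VRAMBudgetScale"),
        0.9f,
        TEXT("Fraction of the DXGI local memory budget reported to the engine as total graphics memory."),
        ECVF_ReadOnly);

    // ...at the point where the budget is applied:
    const float BudgetScale  = FMath::Clamp(CVarD3D12VRAMBudgetScale.GetValueOnAnyThread(), 0.5f, 1.0f);
    const int64 TargetBudget = LocalVideoMemoryInfo.Budget * BudgetScale;
    FD3D12GlobalStats::GTotalGraphicsMemory = TargetBudget;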

Thanks

Andrew

Steps to Reproduce

Run anything that uses DX12 and set breakpoints on the lines mentioned above.

Hi,

Yeah, that hardcoded 0.9 value came from Microsoft 9 years ago, and I guess nobody has challenged it until now. It affects the PoolSizeVRAMPercentage value from DefaultEngine.ini, essentially being multiplied into it, so the default value of 70 makes the engine use 63% of the available VRAM for the streaming pool (unless you use D3D12.AdjustTexturePoolSizeBasedOnBudget, in which case it’s overwritten almost immediately by the code in FD3D12DynamicRHI::RHIGetTextureMemoryStats). We need to clean this up, and I’ve made a ticket for it. In the meantime I think the 0.9 factor is safe to remove: VRAM is virtualized and the budgets are approximate, so it’s probably fine to go up to the limit, or very close to it.
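To make the stacking concrete, here is a purely illustrative calculation assuming the default PoolSizeVRAMPercentage of 70 and a 32GB card:

    // Illustrative only: how the two scale factors multiply with default settings.
    const int64 Budget           = LocalVideoMemoryInfo.Budget;     // e.g. 32 GB
    const int64 TotalGraphicsMem = Budget * 0.90f;                  // hardcoded DX12 factor  -> 28.8 GB
    const int64 TexturePoolSize  = TotalGraphicsMem * 70 / 100;     // PoolSizeVRAMPercentage -> ~20.2 GB
    // Net effect: TexturePoolSize is roughly 0.63 * Budget, unless D3D12.AdjustTexturePoolSizeBasedOnBudget
    // later recomputes it in FD3D12DynamicRHI::RHIGetTextureMemoryStats.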