Hey,
We noticed that texture and mesh streaming can end up in an unstable state (constantly streaming textures/meshes in and out) when the wanted mips are just slightly over budget compared to what fits within r.Streaming.PoolSize.
We use both texture and mesh streaming, and use the stock Unreal CVars to configure streaming behavior:
r.Streaming.UsePerTextureBias=1
r.Streaming.MipBias=1 ; coming from Medium Texture Quality
I am able to reproduce the issue in a cooked client by adjusting `r.Streaming.PoolSize` to just about match the streaming load wanted by the content I’m looking at. Setting either r.Streaming.MipBias or r.Streaming.UsePerTextureBias to 0 fixes the issue.
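For reference, the repro configuration boils down to the following (the PoolSize value here is illustrative, not a magic number; the point is to set it just above the biased wanted load, which the MemoryBudget values in the log further down suggest was around 380 MB for my content):

    r.Streaming.PoolSize=380          ; just above the wanted streaming load
    r.Streaming.UsePerTextureBias=1
    r.Streaming.MipBias=1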
It seems the following two pieces of logic in `FRenderAssetStreamingMipCalcTask::UpdateBudgetedMips_Async` are fighting each other directly, flipping the per-texture bias back and forth. (The UE_LOG lines are my local additions for investigating the issue.)
if (PerfectWantedMipsBudgetResetThresold - MemoryBudgeted - MeshMemoryBudgeted > TempMemoryBudget + MemoryMargin)
{
    UE_LOG(LogContentStreaming, Log, TEXT("Reset BudgetMipBias due to PerfectWantedMipsBudgetResetThresold: PerfectWantedMipsBudgetResetThresold: %.2f, MemoryBudgeted: %.2f, MeshMemoryBudgeted: %.2f, TempMemoryBudget: %.2f, MemoryMargin: %.2f"),
        PerfectWantedMipsBudgetResetThresold / 1024.f / 1024.f, MemoryBudgeted / 1024.f / 1024.f, MeshMemoryBudgeted / 1024.f / 1024.f, TempMemoryBudget / 1024.f / 1024.f, MemoryMargin / 1024.f / 1024.f);

    // Reset the budget tradeoffs if the required pool size shrinked significantly.
    PerfectWantedMipsBudgetResetThresold = MemoryBudgeted;
    bResetMipBias = true;
}
else if (MemoryBudgeted + MeshMemoryBudgeted > PerfectWantedMipsBudgetResetThresold)
{
    // Keep increasing the threshold since higher requirements incurs bigger tradeoffs.
    PerfectWantedMipsBudgetResetThresold = MemoryBudgeted + MeshMemoryBudgeted;
}
...
if (Settings.bUsePerTextureBias && AllowPerRenderAssetMipBiasChanges())
{
    //*************************************
    // Drop Max Resolution until in budget.
    //*************************************
    UE_LOG(LogContentStreaming, Log, TEXT("Drop Max Resolution until in budget: MemoryBudgeted: %.2f, MemoryBudget: %.2f"), MemoryBudgeted / 1024.f / 1024.f, MemoryBudget / 1024.f / 1024.f);
    TryDropMaxResolutions(PrioritizedRenderAssets, MemoryBudgeted, MemoryBudget);
    if (bUseSeparatePoolForMeshes)
    {
        TryDropMaxResolutions(PrioritizedMeshes, MeshMemoryBudgeted, MeshMemoryBudget);
    }
}
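To make the feedback loop concrete, here is a minimal standalone sketch of how I understand the interaction (this is not engine code; the numbers are made up to roughly match the log below, and bResetPending is a simplified stand-in for bResetMipBias taking effect on the following update): the drop branch shrinks the budgeted total to fit the pool, that shrunken total then looks like a "significant shrink" to the reset branch, the reset restores full resolutions on the next update, and the cycle repeats.

    #include <cstdio>

    int main()
    {
        // Illustrative numbers only, chosen to roughly match the log below.
        const float Wanted     = 480.f; // perfect wanted mips, no per-texture bias
        const float Budget     = 340.f; // usable pool (PoolSize minus temp + margin)
        const float TempBudget = 35.f;  // TempMemoryBudget
        const float Margin     = 5.f;   // MemoryMargin

        float BiasSavings    = 0.f;    // memory freed by TryDropMaxResolutions
        float ResetThreshold = Wanted; // PerfectWantedMipsBudgetResetThresold
        bool  bResetPending  = false;  // a pending reset applies on the next update

        for (int Frame = 0; Frame < 8; ++Frame)
        {
            // A pending bias reset restores full resolutions this update.
            if (bResetPending)
            {
                BiasSavings   = 0.f;
                bResetPending = false;
            }

            const float Budgeted = Wanted - BiasSavings;

            if (ResetThreshold - Budgeted > TempBudget + Margin)
            {
                // Branch 1: budgeted usage fell far below the recorded peak,
                // so the per-texture bias gets reset.
                printf("[%d] reset bias (Threshold=%.0f, Budgeted=%.0f)\n", Frame, ResetThreshold, Budgeted);
                ResetThreshold = Budgeted;
                bResetPending  = true;
            }
            else if (Budgeted > ResetThreshold)
            {
                // Keep raising the threshold while requirements grow.
                ResetThreshold = Budgeted;
            }

            if (Budgeted > Budget)
            {
                // Branch 2: over budget, drop max resolutions until we fit.
                printf("[%d] drop resolutions (Budgeted=%.0f > Budget=%.0f)\n", Frame, Budgeted, Budget);
                BiasSavings = Budgeted - Budget;
            }
        }
        return 0;
    }

With these numbers the unbiased wanted load exceeds the pool by 140 MB, which is always greater than TempMemoryBudget + MemoryMargin = 40 MB, so the reset condition re-fires every time the drop branch brings us back into budget. The ~100 MB gaps between the threshold and the budgeted totals in the log below show the same pattern.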
I see the following output spammed in the log:
[817]LogContentStreaming: Reset BudgetMipBias due to PerfectWantedMipsBudgetResetThresold: PerfectWantedMipsBudgetResetThresold: 477.80, MemoryBudgeted: 250.69, MeshMemoryBudgeted: 126.69, TempMemoryBudget: 35.00, MemoryMargin: 5.00
[817]LogContentStreaming: Drop Max Resolution until in budget: MemoryBudgeted: 377.38, MemoryBudget: 340.49
[824]LogContentStreaming: Drop Max Resolution until in budget: MemoryBudgeted: 485.85, MemoryBudget: 340.49
[831]LogContentStreaming: Reset BudgetMipBias due to PerfectWantedMipsBudgetResetThresold: PerfectWantedMipsBudgetResetThresold: 485.85, MemoryBudgeted: 252.12, MeshMemoryBudgeted: 131.45, TempMemoryBudget: 35.00, MemoryMargin: 5.00
[831]LogContentStreaming: Drop Max Resolution until in budget: MemoryBudgeted: 383.58, MemoryBudget: 340.49
[838]LogContentStreaming: Drop Max Resolution until in budget: MemoryBudgeted: 477.80, MemoryBudget: 340.49
...
Is this something you’ve seen in other titles, or possibly fixed in versions newer than UE 5.3? My current workaround is to set r.Streaming.MipBias to 0 for all scalability levels. As I understand it, this will cause streaming to be reported as “over budget”, but pool sizes will still be respected. Is there a better workaround I could use?
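Concretely, the workaround is an override like this in our scalability settings (assuming the stock BaseScalability.ini section names; only the Medium section is shown here, but we apply it to all levels):

    [TextureQuality@1]
    r.Streaming.MipBias=0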
Thanks,
Gábor