Issues with some Virtual Textures refusing to stream ahead of close-up shots in a Level Sequence

Hi all,

We’re experiencing issues across our game trying to get Virtual Textures to pre-stream ahead of camera cuts in Level Sequences, in order to avoid clearly visible texture-quality pop-in. Sometimes we can get these textures to stream in ahead of time, but sometimes, no matter what we try, the same blurry texture persists for the first few frames of the new shot. It most commonly shows up on character skin virtual textures (in the ‘Character’ texture group), but we also see it on some ordinary objects using the ‘World’ texture group.

We are currently focusing on PS5, so that is where our testing shows this, but I expect it’s likely happening on other platforms too. We are on UE 5.6.1 with no ability to upgrade to 5.7 or beyond, but we can cherry-pick individual changes if necessary.

Here is a list of everything we have attempted in order to get these objects streaming in at a decent quality:

IStreamingManager::Get().AddViewLocation ahead of the material/texture streaming - as suggested by Alex Peterson in [Content removed]

GetRendererModule().PrefetchNaniteResource with the mesh render data’s nanite resources - this seems to reliably work to avoid seeing the nanite minimum residency at least

GetRendererModule().RequestVirtualTextureTiles on the render proxy for the material itself, using ScreenSpaceSize as the current viewport size and feature level of GMaxRHIFeatureLevel - this seems to be how MakeHLODRenderResourcesResident referenced by Jeremy Moore in this UDN post operates [Content removed]

On each texture in the material we have tried calling all of the following:

SetForceMipLevelsToBeResident

Setting bForceMipLevelsToBeResident as true - we turn this back off again later

Calling StreamIn on the texture with the max number of mips and bHighPrio as true (this appears to call GetRendererModule().LockVirtualTextureTiles on the VirtualTexture2DResource) - we call StreamOut with 0 mips later

Acquiring the allocated VT on the VirtualTexture2DResource and calling the renderer module’s RequestVirtualTextureTiles on this, using ScreenSpaceSize as the current viewport size, a ViewportPosition in the middle of the viewport, a UV0 of 0,0 and UV1 of 1,1. We have tried passing in a mip level of the max number of mips and 0 (which seems to be the desired value).

We then call the renderer module’s LoadPendingVirtualTextureTiles with a feature level of GMaxRHIFeatureLevel

Setting the texture with ‘Virtual Texture Prefetch Mips’ at the highest mip level

Turning up the ‘Virtual Texture Streaming Priority’ on the texture
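For reference, this is roughly how we chain those per-texture calls together (a simplified sketch of our code, not standalone-compilable since it needs engine headers; PrestreamMaterialTextures is our own helper name, and the exact RequestVirtualTextureTiles arguments are elided):

```cpp
// Simplified sketch of our prestream helper (helper name is ours; engine
// calls as in UE 5.6). Requires engine headers - not standalone-compilable.
void PrestreamMaterialTextures(UMaterialInterface* Material, FVector2D ViewportSize)
{
    TArray<UTexture*> Textures;
    Material->GetUsedTextures(Textures, EMaterialQualityLevel::Num,
                              /*bAllQualityLevels=*/true, GMaxRHIFeatureLevel,
                              /*bAllFeatureLevels=*/true);

    for (UTexture* Texture : Textures)
    {
        UTexture2D* Texture2D = Cast<UTexture2D>(Texture);
        if (!Texture2D || !Texture2D->IsCurrentlyVirtualTextured())
        {
            continue;
        }
        // Pin mips resident for the cut; we clear this flag again later.
        Texture2D->bForceMipLevelsToBeResident = true;
        // Internally ends up calling LockVirtualTextureTiles on the
        // VirtualTexture2DResource; we call StreamOut(0) later.
        Texture2D->StreamIn(Texture2D->GetNumMips(), /*bHighPrio=*/true);

        // We then request the full UV range (0,0)-(1,1) at mip 0 on the
        // allocated VT via GetRendererModule().RequestVirtualTextureTiles(...)
        // (exact arguments elided here).
    }
    // Kick the actual tile loads for everything requested above.
    GetRendererModule().LoadPendingVirtualTextureTiles(GMaxRHIFeatureLevel);
}
```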

We have also tried doubling all our values for the following console variables just to see if that made any difference (but no change was apparent):

r.VT.MaxTilesProducedPerFrame

r.VT.MaxUploadsPerFrame

r.VT.MaxUploadsPerFrame.Streaming

r.VT.MaxUploadMemory

r.VT.MaxUploadRequests

To lighten the load on the system (in case that was the issue), we have also tried requesting two levels below max from the ‘Virtual Texture Prefetch Mips’ setting and from LoadPendingVirtualTextureTiles.

When we make these pre-streaming requests we do see a jump in the residency graph’s ‘Page Residency’ and ‘LockedPage Residency’ lines, but this drops back down to the previous level after about 10 frames. We tried ensuring that we only request the textures 10 frames ahead of display, but with no success - the textures still appear blurry on screen even while the residency graph is at its peak. So although we worried the locked pages might not be kept around long enough, displaying the textures while the graph suggests they are still locked does not work either.

We are all out of ideas, so any advice you can give us on how we can try and help these virtual textures get loaded in before we cut to a close-up of them would be greatly appreciated!

Thanks,

Tom

[Attachment Removed]

One further aside as I ran out of the character limit for my main post:

In some instances where we see these texture-loading issues, the r.VT.Residency.Show graphs seem to be pretty close to the limit; in others there appears to be plenty of space. I will note that our BC7 pool displays with a clamped limit of 254MB - this may be due to the size exceeding GetMax2DTextureDimension in GetPhysicalSpaceExtraDescription in VirtualTextureSystem.cpp. We had previously seen the pool sizes respect our configured size of 320MB, so has something changed recently to limit these pools in this way? We tried enabling the SplitPhysicalPoolSize cvar in case splitting into multiple separate pools would allow a larger total, but this seemed to result in an immediate crash. This is not necessarily the cause of our streaming issues, but I wanted to mention it in case it’s contributing to some of the instances we are seeing.

[Attachment Removed]

Hi Tom,

We have a plugin called CinematicPrestreaming which is intended to help with your use case. Until now it has been Experimental and not recommended for production, but we have been making improvements and bug fixes with the intention of moving it to a Beta state in 5.8.

The plugin works by using the Movie Render Pipeline to record the virtual texture and nanite pages used in a shot. This generates an asset that can be placed in a sequence to play back the page requests some frames ahead of time. As well as automating the process of preloading, this helps minimize the preload to only what is seen (not the full UV range).

There is some very basic documentation here:

https://dev.epicgames.com/documentation/en-us/unreal-engine/cinematic-rendering-export-formats-in-unreal-engine#prestreamingrecorder

One thing that documentation doesn’t cover is the need to add a Pre-roll and set “Start Frame Offset” on prestream assets placed in the sequence. A value of 20 frames for both is usually good.

Below is a list of changes that you would need to integrate to have the latest functionality. Unfortunately, some of the later changes which impact the virtual texture code are quite big.

https://github.com/EpicGames/UnrealEngine/commit/696c6cfbba903c24d3ad5287780c29b62d793d8b

https://github.com/EpicGames/UnrealEngine/commit/c57070db4250681850f3807cf11d1998dd44f1c5

https://github.com/EpicGames/UnrealEngine/commit/7aa7c479a834ad4e4c0235251d3718030fb77424

https://github.com/EpicGames/UnrealEngine/commit/5c8a0f0594f95bc02ac1ed84816b849038fd23a9

https://github.com/EpicGames/UnrealEngine/commit/ec549f79391f668c554c8f42e8ba9257d95f562d

https://github.com/EpicGames/UnrealEngine/commit/92980b6c76f4815794284f73e59f04dc29ef8f82

Also it might be worth integrating a couple of virtual texture fixes which impact the speed of streaming and general performance:

https://github.com/EpicGames/UnrealEngine/commit/a99d242efb1145ad99eab9616386d7562aad5a28

https://github.com/EpicGames/UnrealEngine/commit/f726eae62965f3e8c8889bbb6326b2f34ad39f77

https://github.com/EpicGames/UnrealEngine/commit/af33d711c33d49ef5e4d19eec40eaea27b834b12

For the issue where your pool is now clamped, I think it came from this bug fix which is in 5.6:

https://github.com/EpicGames/UnrealEngine/commit/c322964f9acc30db1fd7181a102ad9a904946ee7

Best regards,

[mention removed]​

[Attachment Removed]

Hey Jeremy,

Thanks for responding! I don’t think there is any safe way we could incorporate all these changes, set up a new plugin, and get it fully implemented at this incredibly late stage of development. Incorporating those larger changes without any real understanding on our end of their impact, or of what further changes they might require, could be very risky at this point.

I’ve given those three virtual texture fixes you mention a try locally, but sadly I am unable to see any difference.

I’ve also tried reverting that final bug fix change you mention, and I still see our pools clamped at 254MB. If we were to unclamp the pool size at this point, we could suddenly use more memory than we have been testing with recently, so while it would be nice to know what caused this, I’m not sure it’s something we can safely expand again.

Thanks,

Tom

[Attachment Removed]

Just posting a reply to make sure this doesn’t get closed, any further ideas would be much appreciated!

[Attachment Removed]

Just wanted to mention that I integrated this into 5.7.4 and the merge was pretty straightforward for all of it.

Using this pre-streaming also seems to fix a weird issue we otherwise see in 5.7.4, so I uploaded a video in case anyone else is seeing the same thing (it was only happening inside MRQ, not in game).

[Attachment Removed]

Not to hijack this thread again, but the cinematic prestreaming stuff has been working great! However, I did encounter something odd, in case anyone else attempts to merge another change related to virtual textures.

[Image Removed]

I merged this change to test whether it fixes something else for us, but after merging it, the editor gets stuck in an infinite loop inside FUniquePageList::Add the moment a cinematic pre-streaming asset is added to the level sequence:

void FUniquePageList::Add(uint32 Page, uint32 Count)
{
    uint32 HashIndex = MurmurFinalize32(Page) & (HashSize - 1u);
    uint32 NumCollisions = 0u;
    while (true)
    {
        uint32 PageIndex = HashIndices[HashIndex];
        if (PageIndex == 0xffff)
        {
            if (NumPages < MaxUniquePages)
            {
                PageIndex = NumPages++;
                HashIndices[HashIndex] = PageIndex;
                Pages[PageIndex] = Page;
                Counts[PageIndex] = Count;
            }
            break;
        }
        else if (Pages[PageIndex] == Page)
        {
            const uint32 PrevCount = Counts[PageIndex];
            Counts[PageIndex] = FMath::Min<uint32>(PrevCount + Count, 0xffff);
            break;
        }
        HashIndex = (HashIndex + 1u) & (HashSize - 1u);
        ++NumCollisions;
    }
#if DO_GUARD_SLOW
    MaxNumCollisions = FMath::Max(MaxNumCollisions, NumCollisions);
#endif // DO_GUARD_SLOW
}
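To illustrate why this loop can spin forever, here is a simplified standalone model of the same open-addressing insert (my own simplification for illustration, not the engine code): if every hash slot is occupied by other pages, the probe sequence never finds an empty slot or a match, so the engine's `while (true)` has no exit. The model returns false in that case instead of looping.

```cpp
#include <cstdint>
#include <vector>
#include <cassert>

// Simplified model of FUniquePageList::Add's probing (hypothetical, for
// illustration): counts are omitted, and instead of looping forever on a
// saturated table we return false - the condition the real loop can't escape.
struct UniquePageList {
    static constexpr uint32_t HashSize = 4;       // power of two, tiny for demo
    static constexpr uint32_t MaxUniquePages = 4; // table can fully saturate
    std::vector<uint32_t> HashIndices = std::vector<uint32_t>(HashSize, 0xffffu);
    std::vector<uint32_t> Pages = std::vector<uint32_t>(MaxUniquePages, 0);
    uint32_t NumPages = 0;

    bool Add(uint32_t Page) {
        uint32_t HashIndex = Page & (HashSize - 1u); // stand-in for MurmurFinalize32
        for (uint32_t Probes = 0; Probes < HashSize; ++Probes) {
            uint32_t PageIndex = HashIndices[HashIndex];
            if (PageIndex == 0xffffu) {              // empty slot found
                if (NumPages < MaxUniquePages) {
                    PageIndex = NumPages++;
                    HashIndices[HashIndex] = PageIndex;
                    Pages[PageIndex] = Page;
                }
                return true;                         // engine code breaks here
            }
            if (Pages[PageIndex] == Page) {
                return true;                         // existing page: merge counts
            }
            HashIndex = (HashIndex + 1u) & (HashSize - 1u); // linear probe
        }
        return false; // every slot holds a different page: real loop never exits
    }
};
```

So a bad or excessive stream of page requests that saturates the hash table would be enough to hang the real function, which may be why the follow-up crash fix mentioned below matters.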

Not sure if this is a known issue [mention removed]​, but I figured I’d let you know in case anyone else tries to merge it - it’s possible that some other CL is also needed to avoid the freeze.

[Attachment Removed]

That change:

https://github.com/EpicGames/UnrealEngine/commit/79d9e708b04940e3a564024b2f2ec7084f3cf4f1

was closely followed by a crash fix that could well be related:

https://github.com/EpicGames/UnrealEngine/commit/ed249f18a8a5fd5ed4be5ec5dcbde203b3bb81f1

[Attachment Removed]

Tom Goodwin: For the original question, if you can’t make the change to using Cinematic Prestreaming, then my first recommendation would be to focus on one of your original attempts:

> GetRendererModule().RequestVirtualTextureTiles on the render proxy for the material itself

I think I would set that up so that:

* You focus on a subset of hero materials (the character ones maybe?)

* You prestream only to a mip level that is sufficient to minimize the visual popping without overwhelming the physical pools

* You trigger the streaming sufficiently in advance and continue to call RequestVirtualTextureTiles() each frame during the prestreaming period.
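As a rough sketch of those steps (hypothetical driver code, not standalone-compilable; FShotPrestreamer, HeroMaterialProxies and the frame windowing are made up for illustration, with the RequestVirtualTextureTiles arguments as described in your original post):

```cpp
// Hypothetical per-frame driver for the approach above. The type and member
// names are illustrative only; engine calls are as referenced in this thread.
void FShotPrestreamer::Tick(FVector2D ViewportSize)
{
    // Only act inside the pre-roll window before the cut lands.
    if (FramesUntilCut > PrestreamWindowFrames)
    {
        return;
    }
    for (const FMaterialRenderProxy* Proxy : HeroMaterialProxies)
    {
        // Re-issue the request every frame so the pages stay resident until
        // the cut; requesting a mip or two below the finest level keeps the
        // physical pools from being overwhelmed.
        GetRendererModule().RequestVirtualTextureTiles(Proxy, ViewportSize,
                                                       GMaxRHIFeatureLevel);
    }
    GetRendererModule().LoadPendingVirtualTextureTiles(GMaxRHIFeatureLevel);
    --FramesUntilCut;
}
```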

Within that system I would use “stat virtualtexturing” to validate that the throughput of virtual texture pages is reaching the expected high amount. If you see only a few pages being updated in a scenario where you are requesting many then that might be something to investigate. Note that for this, one of the changelists mentioned above added some additional stats.

https://github.com/EpicGames/UnrealEngine/commit/a99d242efb1145ad99eab9616386d7562aad5a28

Best regards,

Jeremy

[Attachment Removed]

Hi Jeremy,

Thanks for your response. I can confirm that calling RequestVirtualTextureTiles every frame with a slightly lower mip level does indeed get this working! Sometimes we need to trigger it a little further in advance when several different objects need to stream in, but this is all working pretty well for our purposes so far.

Thanks,

Tom

[Attachment Removed]

Yup that fixed it, much appreciated

[Attachment Removed]