Suspecting DDC is getting invalidated in an unexpected way.

Hi Charles.

Thanks for the follow-up.

I do recognize why it’s as slow as it is for us, and we will be correcting the virtualization issue. But it still seems less than ideal from my point of view. If a piece of a texture (one mip) is retained, I’d like all related mips to be retained: if you’re regularly using that texture, sooner or later you’ll likely need the other mips. If we were concerned with storage space, maybe I’d be fine with ejecting some of this from the cache, but that isn’t a concern of ours at this time. Even with a shared DDC, I could see this being an issue on a small team (which we are).

Thanks in advance, looking forward to hearing more.

From Martin’s earlier answers I was under the impression that it was a retention-duration setting (not a cache size limit) that was causing the mips to expire from the cache. It sounded inevitable, but we could push out the duration to delay it (say, to 90 days). Still, that probably means that when the 90 days are up, people will be hitting hitches again.

One mystery to me: if I delete my local cache folder on disk and open this level in the editor, it appears to cache everything it needs (flying around the level is smooth -- no hitches). If I wait a week (and mips presumably expire from the cache), loading the level alone does not suffice to pad the cache back out. Why does the first level load cache all mips, but subsequent level loads do not?
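As a quick sanity check on the expiry theory, a small script can list cache files that haven’t been touched recently. This is just a sketch: the path below is the default local DDC location on Windows (adjust for your setup), and it assumes the cleaner keys off file access/modification times, which I haven’t verified against the engine source.

```python
import os
import time
from pathlib import Path

def find_stale_files(root: str, max_age_days: float) -> list[str]:
    """List cache files whose last access/modify time is older than max_age_days."""
    cutoff = time.time() - max_age_days * 86400
    stale = []
    for path in Path(root).rglob("*"):
        if path.is_file():
            st = path.stat()
            # Use whichever timestamp is newer; the DDC cleaner's exact
            # criterion is an assumption here.
            if max(st.st_atime, st.st_mtime) < cutoff:
                stale.append(str(path))
    return stale

if __name__ == "__main__":
    # Default local DDC location on Windows; adjust to your project's config.
    ddc_root = os.path.expandvars(r"%LOCALAPPDATA%\UnrealEngine\Common\DerivedDataCache")
    for f in find_stale_files(ddc_root, max_age_days=12):
        print(f)
```

Running this before and after the week-long gap should show whether files are actually aging out on disk or whether something else is invalidating them.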

For anyone following along with this thread, the above-mentioned CL was submitted to FNMain (which licensees don’t have access to), but it was robomerged to UE5/Main as:

CL 43592273

Hiya Charlie.

Sorry for re-opening this, but I wanted to report that we gave this CL a try. I was very hopeful, but it doesn’t seem to help in our case (reporting for anyone else who finds this post in the future). Are there by chance any follow-up CLs needed to get this working?

Additionally, any further diagnostic tips would be helpful. I’ll keep slowly digging into this, but a pointer in the right direction would be appreciated.

I imagine even with a shared DDC there’d still be a hit for streaming mips syncing down at editor runtime (from a cloud DDC). I’d love to hide all of this at map load time, and I’m surprised this hasn’t come up as a discussed issue before (even with the smaller hits of a shared DDC).

Hiya Dan,

The fact that this fix was GC/cook focused does make sense, and does explain why it didn’t help us in our situation.

I did start all this by logging what textures were needing their mips rebuilt and didn’t recognize a pattern (it was varied across the team, and varied across run-to-run; it was seemingly determined by individual users’ usage patterns). We didn’t initially realize that DDC entries for individual mips were expiring and being removed from the cache.

To reframe this conversation, putting aside the ideal DDC storage location, is this an expected course of events:

  1. UserA adds a new virtualized texture, TextureX, to their level
  2. TextureX and all its mips (inlined and streaming) are built and cached in the DDC
  3. UserA loads the level a few times over the next week, but doesn’t fly around to touch all the streaming mips (mips N+)
  4. Twelve days pass, and streaming mips that weren’t touched since step #1 are ejected from the DDC
  5. UserA opens the level and flies around in the editor, hitting the need for mip N
  6. Mip N is missing from the DDC cache, so TextureX’s virtualized bulk data is needed to regenerate it
  7. TextureX’s bulk data is sync’d down from perforce
  8. UserA’s editor hangs, waiting for the bulk sync / DDC rebuild
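The timeline above can be sketched as a toy model of age-based eviction. This is purely illustrative Python, not UE code; the 12-day age, mip counts, and key names are all placeholders:

```python
# Toy model of age-based cache eviction (illustrative only, not UE code).
class ToyCache:
    def __init__(self, unused_age_days: float):
        self.unused_age_days = unused_age_days
        self.entries = {}  # key -> last-access time, in days

    def put(self, key, now):
        self.entries[key] = now

    def evict(self, now):
        # Drop entries not touched within the retention window.
        self.entries = {k: t for k, t in self.entries.items()
                        if now - t <= self.unused_age_days}

    def get(self, key, now):
        self.evict(now)
        if key in self.entries:
            self.entries[key] = now  # access refreshes the age
            return True
        return False  # cache miss -> bulk data sync + rebuild

cache = ToyCache(unused_age_days=12)
# Day 0: TextureX built, all mips cached (step 2).
for mip in range(8):
    cache.put(f"TextureX/mip{mip}", now=0)
# Days 1-7: level loads touch only the inlined mips 0-2 (step 3).
for day in range(1, 8):
    for mip in range(3):
        cache.get(f"TextureX/mip{mip}", now=day)
# Day 13: the user flies around and needs a streaming mip (steps 5-6).
print(cache.get("TextureX/mip5", now=13))  # False: evicted, triggers rebuild
```

The point of the sketch is that routine level loads keep refreshing the inlined mips while the untouched streaming mips silently age out, so the miss only surfaces once someone actually flies far enough to need them.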

The hang can scale differently according to a number of factors (data size, internet speed, etc.), but what I’m looking to confirm is whether there is an expected hit (large or small) that the user will encounter in this scenario.

Thanks!

Thank you Dan.

We’ve added a backlog task to look into mimicking the work you did for cooking. That said, it’d be appreciated if you could update this thread with a CL if/when the work you mention is completed on your side.

Thanks again for helping me through all of this.

Cheers!

Appreciate you circling back Dan.

Is there an estimated completion date? I won’t hold you to it, but it would give me a date to circle back on with a new UDN question if we end up needing it. (We’re trying to solve it ourselves as well, but have been struggling; entries still seem to be ejected even when we turn up the `UnusedFileAge` field.)
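For reference, the knob in question lives in the filesystem backend entry of the `[DerivedDataBackendGraph]` section (typically overridden per-project in DefaultEngine.ini). The field names below follow the stock FileSystem backend entry as I understand it, but the values are illustrative; note that, as I read it, `DeleteUnused` has to be true for `UnusedFileAge` to have any effect, so if entries are still being ejected after raising the age, it may be worth confirming which backend’s cleaner is doing the ejecting.

```ini
[DerivedDataBackendGraph]
; Illustrative values; field names follow the stock FileSystem backend entry.
Local=(Type=FileSystem, ReadOnly=false, Clean=false, Flush=false, PurgeTransient=true, DeleteUnused=true, UnusedFileAge=90, FoldersToClean=-1, Path=%ENGINEVERSIONAGNOSTICUSERDIR%DerivedDataCache)
```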