Random variations in memreport data for some texture groups (TEXTUREGROUP_Lightmap)

Hello,

In UE 5.5 I'm seeing random variations in memreport data for several texture group stats. Here is one example with TEXTUREGROUP_Lightmap: in the same build and the same scenario, this metric reports different values between runs (a difference of around +23 MB in the second run).

As a result, other related metrics such as STAT_TextureMemory2D and STAT_D3D12Textures also show the same difference.

At the same time, there is no difference in the texture list, and the Total TEXTUREGROUP_Lightmap metric reports exactly the same value in both cases.

I’m attaching two memreports showing this behavior.

So the question is: is this a known issue, and does a newer UE version already include a changelist that fixes it?

[Attachment Removed]

Steps to Reproduce

[Attachment Removed]

Hi Alexander,

Yes, that is definitely an odd problem. You mentioned that this issue is not exclusive to the lightmap texture group? Do you have a list of the other texture groups where you are also seeing an incorrect texture memory count between runs?

[Attachment Removed]

Hello Tim,

Sorry for the delay. Yes, we saw the same behavior with TEXTUREGROUP_UI; here are the memreports for that case.

[Attachment Removed]

Hi Alexander, sorry for not getting back to you sooner. I am just getting on top of the pile of tickets I had after the winter break. I have reached out to some members of the mobile team regarding this issue and will update you as soon as I have more information to share. If this issue needs an urgent resolution for you, please let me know, and I can try to escalate the ticket.

[Attachment Removed]

Hi Alexander,

So I have an update for you. The reason you have not heard from me in a while is that I have been trying to find an expert on our end to help diagnose your stats issue. As it turns out, we do not have an owner for this system at the moment, so it has been tough getting information for you. The best I can do right now is give you some guidance so we can work together to see if we have a memory management issue on our hands or if something is off with the stats system.

From what I have been told, if you reduce your CPU core count during your profiling runs (-corelimit=4 or something similar) and run your tests, you should get more consistent output. The reason you should see more consistent results is that we suspect the stats system can, at times, run into race conditions where some stats are incremented and decremented out of order or not at all. If your numbers do not get more consistent with this test, then we have an issue unrelated to the stats system.

Please let me know if you can test this theory for us and what your results are. Thanks for your patience and cooperation.
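For anyone following along, a test run along the lines of Tim's suggestion might look like the sketch below. The `-corelimit=4` flag is the one suggested above, and `memreport -full` is the standard console command for capturing these reports; the executable name is a placeholder, not the actual project:

```shell
# Hypothetical launch line (executable name is a placeholder).
# -corelimit=4 restricts the number of CPU cores, per the suggestion above,
# to reduce the chance of the suspected stats race condition.
# -log opens the log window so the run can be monitored.
MyGame.exe -corelimit=4 -log

# Then, once the repro scenario is loaded, capture the report from the
# in-game console (~):
#   memreport -full
# and compare the TEXTUREGROUP_* lines across runs.
```

If the per-group numbers stabilize under the reduced core count, that would point at the stats bookkeeping rather than an actual memory difference, which matches the observation that the texture list and the Total TEXTUREGROUP_Lightmap value are identical between runs.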

Cheers,

Tim

[Attachment Removed]

Hello Tim,

Yes, we can run our regular autotests with this parameter and check whether this improves the stability of the problematic metrics. I will let you know about the results.

Kind regards

[Attachment Removed]

OK, sounds good! You can post something here whenever you have some data.

[Attachment Removed]