LOD Bias on Texture Sources Not Reflected in Memory

Summary

Applying LOD bias to texture source assets in the content drawer does not impact the memory consumed by the project.

Since LOD bias limits the maximum resolution at which a texture is ever displayed, applying it should be reflected in lower memory consumption. Because the reported memory does not change, it seems the memory calculation only checks whether the texture has mipmaps and ignores LOD bias.
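As a rough sanity check on the savings one would expect here, the following is a sketch assuming uncompressed RGBA8 and a full mip chain (real assets are block-compressed, so absolute numbers will differ, but the ratio holds):

```python
def mip_chain_bytes(width, height, bytes_per_pixel=4, lod_bias=0):
    """Total bytes for a full mip chain after dropping `lod_bias` top mips."""
    w = max(width >> lod_bias, 1)
    h = max(height >> lod_bias, 1)
    total = 0
    while True:
        total += w * h * bytes_per_pixel
        if w == 1 and h == 1:
            break
        w, h = max(w // 2, 1), max(h // 2, 1)
    return total

full = mip_chain_bytes(1024, 1024)                # ~5.3 MB uncompressed
biased = mip_chain_bytes(1024, 1024, lod_bias=2)  # ~0.33 MB, roughly 1/16
```

So if the bias were reflected in the calculation, a 1024x1024 texture capped at 256x256 should report around one sixteenth of its original footprint.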

Please select what you are reporting on:

Unreal Editor for Fortnite

What Type of Bug are you experiencing?

Memory

Steps to Reproduce

  1. Import 1024x1024 textures into the content drawer
  2. Use the textures in the level
  3. Launch a session and take note of the consumed memory.
  4. Apply a LOD bias of 2 to all textures to cap the displayed resolution at 256x256.
  5. Launch a new session and take note of the consumed memory.

Expected Result

Consumed memory is reduced after applying a LOD bias of 2 to all textures

Observed Result

Consumed memory is unchanged after applying LOD bias to texture source assets.

Platform(s)

Windows

Thank you for the detail, we’ll get someone to take a look.


Is this still happening @OohWowow ?

FORT-1023882 has been added to our ‘To Do’ list. Someone’s been assigned this task.

I am now seeing some change in memory after applying LOD bias, but the change is smaller than I would expect given how aggressively I am biasing the textures.

This is anecdotal and depends largely on project specifics, but when I applied a LOD bias of 10 to all textures used in a level consuming around 130k memory, I saw a reduction of less than 1,000. We make heavy use of custom assets, so I would expect a larger impact.

These results come from the live-edit memory thermometer after pushing changes, not from a memory calculation. As changes are pushed, there is some variability in the reflected memory, and the change after applying LOD bias falls within that margin of error.

After calculating memory, the editor often gets stuck on the “collecting package dependency information” stage for a very long time, or crashes outright. This makes it difficult to get an accurate comparison between two memory calculations within the same session.

I did some more testing on a much lower-memory project, and was able to get multiple memory calculations to complete in the same session. I tested applying a LOD bias of 10 to all textures used in the level, to represent the most memory that could be saved by applying LOD bias.

After pushing this change and calculating memory again, I saw memory increase by 5,000. I re-launched the session with the same changes, and memory remained 5,000 above the version without LOD bias applied.

It is possible that I am misunderstanding the impact of applying LOD bias, but I was under the impression that it non-destructively reduces the maximum resolution used in-game for the sake of performance and memory.

Hi @OohWowow

There is a lot of information in

and in

Memory Management in Unreal Editor for Fortnite | Fortnite Documentation | Epic Developer Community

but I cannot find anything about

LOD for Fortnite

Hi @OohWowow,

Apologies for the delayed response. The memory calculation doesn’t currently account for streamed data, as the amount of texture memory streamed in can vary based on pool size, the number of other streaming textures present, how close you are to an instance of the texture, etc.

Therefore the calculation only counts the baseline amount of memory used by the texture. This can be high if the texture isn’t set up to stream, but should otherwise be quite low. Changing the LOD bias implies that mips have already been generated, and those mips should be streamed, so the change is unlikely to affect the value reported by the memory calculation process.
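To illustrate why that accounting is insensitive to LOD bias, here is a toy model (the function names, the resident-mip count, and the uncompressed sizes are all hypothetical; the real calculation is internal to the editor): if only a fixed, always-resident tail of the mip chain is counted, dropping top mips via bias never changes the reported figure.

```python
def mip_sizes(width, height, bytes_per_pixel=4):
    """Byte size of each mip level, largest first (uncompressed)."""
    sizes = []
    w, h = width, height
    while True:
        sizes.append(w * h * bytes_per_pixel)
        if w == 1 and h == 1:
            break
        w, h = max(w // 2, 1), max(h // 2, 1)
    return sizes

def reported_memory(width, height, streaming=True, resident_mips=3, lod_bias=0):
    sizes = mip_sizes(width, height)[lod_bias:]  # bias drops the top mips
    if streaming:
        # Only the always-resident tail of the chain is counted;
        # the streamable top mips are ignored by the calculation.
        return sum(sizes[-resident_mips:])
    return sum(sizes)  # non-streaming: the whole chain is baseline

# The resident tail is identical with or without bias, so the
# report stays flat for a streaming texture:
flat = reported_memory(1024, 1024) == reported_memory(1024, 1024, lod_bias=2)
```

Under this model, only a non-streaming texture (where the whole chain is baseline) would show a drop after biasing, which matches the behavior described above.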

If you are interested in how much memory your textures are streaming at runtime, I recommend checking out the Spatial Profiler, as this offers a selection of memory metrics which can tell you the impact your LOD bias changes will have (if any).

Hope this helps!