We’re looking to reduce the memory usage of our dedicated server. I noticed 32 MB is going to OOMBackupPool, which comes from WindowsPlatformMemory.h here:
static uint32 GetBackMemoryPoolSize()
{
	/**
	 * Value determined by series of tests on Fortnite with limited process memory.
	 * 26MB sufficed to report all test crashes, using 32MB to have some slack.
	 * If this pool is too large, use the following values to determine proper size:
	 * 2MB pool allowed to report 78% of crashes.
	 * 6MB pool allowed to report 90% of crashes.
	 */
	return 32 * 1024 * 1024;
}
I was wondering:
Did you do any testing on a server? I'm wondering if this can be lower there, since a server uses less memory overall.
If we lowered this and missed, say, 10% of crashes, would those be fairly random, or would we likely miss entire categories of crashes? Missing a random 10% would be okay, but missing certain types of crashes every time could be bad, as we'd never know about those bugs.
Those numbers date back to 2016, so it's unclear whether they are still relevant. The percentage comments can be misleading: they only apply to crashes where the process runs out of memory, so the lost crash info is likely a much smaller percentage when looking at global crashes. When the process runs out of memory, the pool is freed so that the following code can run:
GMalloc->DumpAllocatorStats : logs the allocator's stats when the process runs out of memory
FCoreDelegates::GetOutOfMemoryDelegate().Broadcast() : this only runs code if the process is collecting LLM info (FLLMCsvWriter::FlushOnCrash)
In the case of servers, you are likely trying to maximize the number of processes per physical server. If one of the processes ever runs out of RAM, you would need to collect the state of all the processes to understand what happened. It's unclear whether that information would be useful in its current state, so you will probably be OK with losing a small percentage of OOM reports.