How to get the build system to use more threads? (4.27)

I just upgraded to a new i9-12900K based system with 32 GB RAM and some fast SSDs. On my first build of the engine from source I see that it only maxes out 8 of the 24 hardware threads. I am using Windows 10 Pro, not 11, so I was wondering if this is some byproduct of the new processor's asymmetric core architecture, but either way, I'd like to convince the build system to peg at least 8 more threads.

I've tried setting the max processor count to 24 in BuildConfiguration.xml, and that gets the output to say it is building with 24 processes, but Resource Monitor still shows only 8 CPUs maxed out.
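For reference, this is roughly the fragment I'm using (element names as I understand them from the 4.27 UBT docs, so treat this as a sketch rather than the exact file):

```xml
<?xml version="1.0" encoding="utf-8" ?>
<Configuration xmlns="https://www.unrealengine.com/BuildConfiguration">
  <ParallelExecutor>
    <!-- Cap on the number of parallel compile actions UBT will run locally -->
    <MaxProcessorCount>24</MaxProcessorCount>
  </ParallelExecutor>
</Configuration>
```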
v4.27.1
ideas?
Dave

And did you try Menubar → Tools → Options → Projects and Solutions → Build and Run → "maximum number of parallel project builds"? (Which version of VS are you using?)

No, but I will try that now. Thanks, DR
(using VS community 2019)

Nope. It says 24 processes, but according to Resource Monitor only 8 CPUs are maxed…

Strange… :thinking::thinking::thinking::thinking:

Might be an issue with the core architecture of these CPUs. You can try having a look through some of the settings here. UBT uses its own settings, independent of the settings within Visual Studio, but by default IIRC it is set up to use all available cores.

As an addendum, 32 GB of RAM is definitely not enough for 24 threads anyway, so opening up more threads may result in occasional build failures due to running out of heap space. Every instance of cl.exe has its own copy of the PCH in memory, so once it grows above a certain size (which it frequently does in large UE projects) heap space becomes a real problem. (I'm no compiler expert, but that's my understanding of it.)

My current advice is a minimum of 2-3 GB per thread, leaving some overhead for the OS too.
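As a rough sketch of that heuristic (the per-thread and OS figures are just my ballpark estimates, not anything UBT actually computes):

```python
def safe_thread_count(total_ram_gb, per_thread_gb=2.5, os_reserve_gb=4.0):
    """Rough heuristic: reserve some RAM for the OS, then divide the rest
    by an estimate of what each cl.exe instance needs at peak."""
    return max(1, int((total_ram_gb - os_reserve_gb) // per_thread_gb))

print(safe_thread_count(32))  # -> 11
```

So on a 32 GB machine, something in the 11-14 range is a safer cap than 24, depending on how big your PCHs get.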

Thanks for the reply! I've been watching the memory, and in-use doesn't creep over 18 GB currently. I do see how, if all 24 threads fired, it could be a problem. Good safety tip. I guess I should be shooting for something more like 16 in flight at once. That would be excellent too.

The link you referred to is where I got the information on BuildConfiguration.xml, and the printouts in the build output show that it is being read; I just don't see a difference in what it really does. Maybe it's a problem inherent to the new CPU architecture, with P-cores and E-cores confusing the issue.

Will try some more tinkering later.
DR
(may require pulling out the shovel and digging into the build system itself to see what’s what)

Update for anyone else finding this thread: I took my build time from 3700 seconds to 1500 seconds by going into the BIOS and disabling all E-cores. After that the build system ran with 16 threads maxed out.

I hope we get some updates and scheduler work to make that performance available with the entire CPU operational. Maybe in Windows 11? I'm not ready to beta-test that with my work yet.
DR

Hello, do you have any news about this problem? I just got a 12900K too, and Unreal is only using the 8 E-cores for building lighting and shaders. That's quite sad, because I would like to use the full 16 cores / 24 threads for max performance.


I also cross-posted this to the Unreal Developer Network and had a bit of a short back-and-forth. The latest response (6 days ago):
Hey Dave,

Thanks again for this. We’re looking at making some changes to the process priorities mentioned above to possibly alleviate the issue a bit. If/when we do make changes I’ll be sure to let you know.

Have a great week!

Branden

So, no solution from Epic yet, but I expect they are looking at it. I went into my motherboard's BIOS settings and disabled all the E-cores. That took my full rebuild from 3700 seconds to 1500 seconds. Worth doing. I'm just running with all E-cores disabled for now.
DR


Hi, thanks for the answer. You can check this page too if you want; it's a great article about the 12th gen.
A solution is to use the free "Process Lasso" software to force Windows 10 or 11 to use all cores. I tested it and it works for me.

I believe there are some settings in BaseEngine.ini that you can use to adjust the number of processes used for compiling shaders. It's not entirely clear how they work, though.

[DevOptions.Shaders]
; See FShaderCompilingManager for documentation on what these do
bAllowCompilingThroughWorkers=True
bAllowAsynchronousShaderCompiling=True
; Make sure we don't starve loading threads
NumUnusedShaderCompilingThreads=3
; Make sure the game has enough cores available to maintain reasonable performance
NumUnusedShaderCompilingThreadsDuringGame=4
; Batching multiple jobs to reduce file overhead, but not so many that latency of blocking compiles is hurt
MaxShaderJobBatchSize=10
bPromptToRetryFailedShaderCompiles=True
bLogJobCompletionTimes=False
; Only using 10ms of game thread time per frame to process async shader maps
ProcessGameThreadTargetTime=.01
; For regular machines, wait this many seconds before exiting an unused worker (float value)
WorkerTimeToLive=20
; For build machines, wait this many seconds before exiting an unused worker (float value)
BuildWorkerTimeToLive=1200
; Set process priority for ShaderCompileWorker (0 is normal)
WorkerProcessPriority=-1

; These values are for build machines only currently to reduce the number of SCWs spawned to reduce memory pressure
bUseVirtualCores = False
; CookerMemoryUsedInGB = 49
; MemoryToLeaveForTheOSInGB = 3
; MemoryUsedPerSCWProcessInGB = 0.4
; MinSCWsToSpawnBeforeWarning = 8

; Use SCW memory pressure calculations regardless of whether cooking is done on a build machine
; Note: CookerMemoryUsedInGB, MemoryToLeaveForTheOSInGB, MemoryUsedPerSCWProcessInGB must all be set to enable
bForceUseSCWMemoryPressureLimits = False
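If you want to double-check what's actually in effect, the section is close enough to standard INI syntax to parse with ordinary tools. A minimal sketch (the sample text here is just an excerpt, and the usage is illustrative):

```python
import configparser

# Excerpt of the [DevOptions.Shaders] block from BaseEngine.ini
SAMPLE = """
[DevOptions.Shaders]
bAllowCompilingThroughWorkers=True
NumUnusedShaderCompilingThreads=3
WorkerProcessPriority=-1
"""

# UE .ini files use ';' for comments, so tell configparser about that
parser = configparser.ConfigParser(inline_comment_prefixes=(";",))
parser.read_string(SAMPLE)
print(parser.getint("DevOptions.Shaders", "WorkerProcessPriority"))  # prints -1
```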

Hi! I'm pretty sure this isn't your issue, but just in case: in the past, Incredibuild has been a vector for this type of problem.

When it's not fully licensed for all your CPU cores, completely uninstalling it has solved this for me in a couple of instances.

Thanks, but that's not the case I'm looking at. The build team at Epic is looking into it.

Hi Dave.
Recently I installed Unreal 5, also on a 12900K, and found the same problem.
After reading this thread and some other info about Windows 10's problems with scheduling on P/E cores (hopefully solved in Windows 11; I don't know, it's still "get ready for…" in my updater), I think I found a solution for Windows 10.
The problem seems to be the default priority of the ShaderCompileWorker processes: it is set to below normal (-1), so Windows 10 treats them as nice background processes and assigns them to E-cores only.
First I tried changing the priorities manually in Task Manager's Details view, and it worked.
So I changed the following settings in BaseEngine.ini:
[DevOptions.Shaders]

ShaderCompilerCoreCountThreshold = 12 ; default is 12, which currently gives about 60-70% total CPU load during shader compilation; I expect it would hit 100% after changing it to 24

WorkerProcessPriority=0 ; ← most important here, the default was -1
And now all cores are evenly balanced, with total utilization depending on the number of worker threads.
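For context, here is my reading of how that relative WorkerProcessPriority number maps onto Windows priority classes. This is a sketch for illustration, not Epic's actual code:

```python
# Assumed mapping from UE's relative priority modifier to Windows
# process priority classes (my understanding, not verified against source).
PRIORITY_CLASSES = {
    -2: "IDLE_PRIORITY_CLASS",
    -1: "BELOW_NORMAL_PRIORITY_CLASS",
    0: "NORMAL_PRIORITY_CLASS",
    1: "ABOVE_NORMAL_PRIORITY_CLASS",
    2: "HIGH_PRIORITY_CLASS",
}

def worker_priority_class(priority_modifier):
    """Clamp the ini value into the supported range and look up the class."""
    clamped = max(-2, min(2, priority_modifier))
    return PRIORITY_CLASSES[clamped]

print(worker_priority_class(-1))  # prints BELOW_NORMAL_PRIORITY_CLASS
```

Which would explain why -1 lands the workers on the E-cores under the Windows 10 scheduler, and 0 spreads them across everything.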


Nice find! Did you find a similar fix for C++ builds? I've been continuing to use Process Lasso (free version) to bump the priority up to Normal for all cl.exe and ShaderCompileWorker processes, but it would be good to take that out of the mix if possible.

DR

I'm having this exact problem with a 12900K as well. Any updates from Epic?

EDIT: I can confirm that setting the process priority of cl.exe (and link.exe and ShaderCompileWorker.exe) to “Normal” instead of “Below normal” in Process Lasso fixes the issue.

For anyone who is building Unreal themselves and doesn't mind making changes to UnrealBuildTool:

In ParallelExecutor.cs, the ManagedProcess is created with BelowNormal priority. Change this to your liking if you use local parallel builds.

static void ExecuteAction(ManagedProcessGroup ProcessGroup, BuildAction Action, List<BuildAction> CompletedActions, AutoResetEvent CompletedEvent)
{
	try
	{
		// ProcessPriorityClass.BelowNormal is what lets Windows 10 push these
		// compile actions onto the E-cores; Normal keeps them on all cores.
		using (ManagedProcess Process = new ManagedProcess(ProcessGroup, Action.Inner.CommandPath.FullName, Action.Inner.CommandArguments, Action.Inner.WorkingDirectory.FullName, null, null, ProcessPriorityClass.BelowNormal))
		{
			Action.LogLines.AddRange(Process.ReadAllLines());
			Action.ExitCode = Process.ExitCode;
			Action.ProcessorTime = Process.TotalProcessorTime;
			Action.ExecutionTime = Process.ExitTime - Process.StartTime;
		}
	}
	catch (Exception Ex)
	{
		Log.WriteException(Ex, null);
		Action.ExitCode = 1;
	}

	lock (CompletedActions)
	{
		CompletedActions.Add(Action);
	}

	CompletedEvent.Set();
}

Check this out. I had the same problem. Currently my games compile in 2-10 seconds without any issue, even on the first run.
