Build Machine Specifications

Hi,

We want to set up a powerful build machine configuration that gives us good iteration times when building the game for multiple platforms (PC & consoles).

We’d like to know what kind of setup we should go for:

a) a server-grade machine running multiple VMs, with huge multi-threading power and easy scalability, but weaker single-threaded performance

b) a group of classic PCs with strong single-threaded performance but poor scalability

c) a mix of those

Our project will be large, with many 3D assets and a big streamed world.

I’ve read this article: Build Machine Specifications [Content removed]

But it’s from 2021, and I wonder if anything has changed with the new Zen server infrastructure.

So basically, we’d like to go with the server machine, but we worry that its weaker single-threaded performance could become the bottleneck of the build process.



Hi there,

Some of this will depend on the type of project and on the budget.

You will definitely find a benefit running Zen server, especially for your team members’ iteration times, as well as the ability to deploy quickly to consoles with Zen streaming (https://dev.epicgames.com/documentation/en-us/unreal-engine/how-to-use-zenserver-streaming-to-play-on-target-in-unreal-engine).

For general workflow, a great reference is the Epic Way doc: Setting up an Unreal Engine Studio the Epic Way | Tutorial.

Definitely consider Horde with UBA (Unreal Build Accelerator), optionally with some type of build cache, along with your Zen server cache.

As for the hardware setup:

I think the basic rules still apply: as much RAM as needed, and then as many cores as you can get.

Single-threaded performance can impact compile times, but so does having more cores.

That being said, spreading those cores over multiple machines can be useful: UBA can distribute the build across your build machines, which can also act as agents for your developers.
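The single-thread vs. core-count trade-off raised in the original question can be sketched with Amdahl’s law. Everything here is an assumption for illustration: the 0.95 parallel fraction, the 24-core desktop, and the 0.65 relative single-thread speed (the asker’s 60-70% estimate) are not measured numbers.

```python
# Illustrative Amdahl's-law comparison of a fast-single-thread desktop
# vs. a many-core server. All inputs are assumptions, not benchmarks.

def build_time(parallel_fraction, single_thread_speed, cores):
    """Estimated build time, normalized so a 1.0-speed single core
    finishes the whole job in 1.0 time units."""
    serial = (1 - parallel_fraction) / single_thread_speed
    parallel = parallel_fraction / (single_thread_speed * cores)
    return serial + parallel

P = 0.95  # assumed parallelizable fraction of the build

# Desktop-class: full single-thread speed, 24 cores (assumed).
desktop = build_time(P, single_thread_speed=1.0, cores=24)
# Server-class: ~65% single-thread speed, 128 cores.
server = build_time(P, single_thread_speed=0.65, cores=128)

print(f"desktop: {desktop:.4f}  server: {server:.4f}")
```

With these particular assumptions the server comes out roughly even with the desktop, which is the point: the more serial work a build has (linking, packaging, single-threaded cook steps), the more the weaker single thread eats into the core-count advantage.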

If scalability is a requirement, definitely take that into consideration.

In the past faster core performance has usually been the priority on projects, as it can have a big impact on build times.

With the server-grade hardware you are looking at, how much worse is the single-threaded performance?

How many builds are you anticipating running?

Are you running builds on every commit? How many commits are you anticipating per hour?

Ideally, having builds out in 30 minutes is really useful, both for QA and for your non-developers who may rely on editor builds.

Usually in a CI/CD environment faster builds are nice, but the difference between 20-minute and 30-minute build times might not be as impactful.


Hi,

Our project is an open-world RPG; think of Oblivion Remastered as a reference for the scale of the game.

We are thinking about AMD EPYC CPUs with roughly 60-70% of the single-threaded performance of an Intel Core Ultra 9 285K, but with many cores (128 or 192) and easy scalability.

How many builds - it depends, but let’s say we’d like to make ~30 builds daily, and we’d like each build to be as fast as possible, since it will affect iteration time in some cases.
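A quick capacity check on those targets. The 10-hour working window is an assumption; the 30-builds-a-day and 30-minute figures come from this thread.

```python
# Back-of-envelope builder-count check. Inputs are the targets stated
# in this thread plus an assumed 10-hour working window.
import math

builds_per_day = 30
build_minutes = 30      # target wall-clock time per build
working_hours = 10      # assumption: builds cluster into a 10-hour window

machine_minutes = builds_per_day * build_minutes   # total builder time needed
window_minutes = working_hours * 60                # time available per builder

# Minimum concurrent builders if builds arrive evenly; real queues are
# bursty, so plan for headroom on top of this.
min_builders = math.ceil(machine_minutes / window_minutes)
print(min_builders)  # -> 2
```

So on paper two concurrent 30-minute builds cover the load, but burstiness around lock/submit deadlines usually means provisioning more.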

Builds of every commit - that would probably be too often; we’ll use specific triggers or run builds manually.

30 minutes build time sounds good to us.


Hi there,

It’s hard to give a definitive response, since projects can differ so much.

From my personal experience, I’d go with the EPYC as long as it’s not one of the older ones: slower single-threaded speed, but much higher memory bandwidth and core count, which is helpful for builds.

A good measure is 1.5-2 GB of RAM per core for compiling. For cooking it’s very much project-dependent: as much as is needed for stable builds.
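Applying that rule of thumb to the EPYC core counts mentioned earlier in the thread gives a rough RAM budget for the compile workload:

```python
# RAM sizing from the 1.5-2 GB-per-core rule of thumb above.
# Core counts are the 128/192-core EPYC options from this thread.

def ram_range_gb(cores, low=1.5, high=2.0):
    """Suggested (min, max) RAM in GB for compile workloads."""
    return cores * low, cores * high

for cores in (128, 192):
    low, high = ram_range_gb(cores)
    print(f"{cores} cores -> {low:.0f}-{high:.0f} GB RAM")
# 128 cores -> 192-256 GB; 192 cores -> 288-384 GB
```

Cooking memory is on top of this and, as noted, project-dependent.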

Kind Regards

Keegan Gibson


That depends on what you’re talking about building. Thirty editor builds daily is easy: with a couple hundred cores available to Horde, UBA, and UbaCacheServer, you can easily have many CI editor builds done in 10 minutes, and building every code commit is not at all unusual. I’ve got builders doing CI editor builds in under 7 minutes, since they are doing incremental builds; they are actually limited by their disk speed.

If you’re talking about 30 complete cooks of an entire very large game, with packaging, that’s an entirely different question, unless Zen caching of cooks (something I’m still getting up to speed on) makes for a different world than before Zen. I’ve got a project that I’ve just been handed that currently will not build in Horde, because Horde has a maximum limit of 24 hours per task runtime. I’ve only got about 80 cores on that project right now; I’m getting more, but it takes a *while* to clone monster-sized Perforce depots to spinning disks, and monster-sized NVMes are not as readily and (relatively) inexpensively available as they were a year or so ago.

The last project I shipped had a development build time of around 8 hours and a production build time of around 18 hours. And that’s with lots of fast cores backing it all.

Honestly, as fast storage as you can get, and as many cores as you can get, with as much RAM as is necessary to feed those cores (1.5 GB+ per core), seems to be the best thing to do.
