UBA Cache Server (Experimental)

Regarding the UBA Cache Server (Experimental), I have searched for info but cannot find anything.

I believe UbaCacheService.exe is what handles the cache. I have had it running both locally and remotely, and the compilation seems to connect, but I always get cache misses on every compilation.

What do I need to do to get the UBA Cache Server working?

It’s been a while since I did this, as we had failures on some platforms (though that was prior to 5.6). To fill the cache, you need to build on a machine with the UBA cache configured, using either the -BuildMachine or -UBAWriteCache command-line parameter, or the <UnrealBuildAccelerator><WriteCache>true</WriteCache>…</UnrealBuildAccelerator> element in BuildConfiguration.xml. WriteCache defaults to BuildMachineOnly, so your regular build processes should normally write to it, provided they are configured to use the cache server.
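For reference, here is a minimal BuildConfiguration.xml sketch of the write side, using the WriteCache element mentioned above (the Configuration root and namespace are the standard wrapper UBT expects; how you point UBT at your cache server is separate and version-dependent, so check the UBA docs for your engine version):

```xml
<?xml version="1.0" encoding="utf-8" ?>
<Configuration xmlns="https://www.unrealengine.com/BuildConfiguration">
  <UnrealBuildAccelerator>
    <!-- Fill the cache from this machine. Defaults to BuildMachineOnly,
         so normally only build-farm machines write to the cache. -->
    <WriteCache>true</WriteCache>
  </UnrealBuildAccelerator>
</Configuration>
```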

Then anyone using the cache server should be able to read from it.

Keep an eye on your builds after enabling it, though: it worked great for us for a few days, and then everything exploded. That was with 5.4 or 5.5, when it was much less mature; I haven’t yet had a chance to try it with 5.6 or 5.7. I’ve had no problems at all using it locally, where I only build Windows, but on our build farm we build several platforms, and I suspect that’s what caused the breakage.

Nice to hear you are testing it out; it can give a nice boost for sure. We are using it for our entire build farm, for all platforms and hosts (Win/Mac/Linux). We run the actual service on Linux, and we have one UbaCacheService instance handling all our branches.

One important thing to point out: if you are planning on using the cache for devs, and everyone has different folders where they sync the code, then you must enable VFS, otherwise you will not really get any cache hits. We are planning on enabling VFS globally inside Epic, but it is still opt-in here (the farm has had it on for a year or so).

With VFS enabled, all devs should be able to share the cache, given that they are on the same SDK versions.

You enable VFS with either “-vfs” on the command line or <bUseVFS>true</bUseVFS> inside BuildConfiguration.xml (under the BuildConfiguration category).
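Putting that element into a full file, a minimal BuildConfiguration.xml sketch (again, the Configuration root and namespace are the standard wrapper):

```xml
<?xml version="1.0" encoding="utf-8" ?>
<Configuration xmlns="https://www.unrealengine.com/BuildConfiguration">
  <BuildConfiguration>
    <!-- Virtualize all paths so cache entries match across machines
         regardless of where each dev syncs the code. -->
    <bUseVFS>true</bUseVFS>
  </BuildConfiguration>
</Configuration>
```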

So, what is VFS, you might ask :slight_smile:

With VFS, UBT generates all .rsp files and paths with fake paths (z:\uevfs\…). Then, when building, we tell UBA how to resolve the virtual paths back to the physical files. This means the entire toolchain (cl.exe, link.exe, etc.) never sees any paths other than z:\uevfs\. What is really cool about this is that all machines, regardless of setup, have the same inputs and outputs, which also means that everything can be fully cached. We have implemented this for all platforms (I think my colleague recently fixed a bug where some build actions on iOS were not properly using virtual paths, but 99% should work).
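The core idea can be shown with a toy sketch (purely illustrative, not UBA’s actual implementation; the helper names and the example sync folders are made up, only the z:\uevfs virtual root comes from the description above):

```python
# Toy illustration of path virtualization: every machine maps its own sync
# folder to the same shared virtual root, so compiler inputs and outputs are
# byte-identical across machines and cache entries can be shared.
VIRTUAL_ROOT = r"z:\uevfs"

def virtualize(path: str, physical_root: str) -> str:
    """Map a physical path under this machine's sync folder to the virtual root."""
    assert path.lower().startswith(physical_root.lower())
    return VIRTUAL_ROOT + path[len(physical_root):]

def devirtualize(path: str, physical_root: str) -> str:
    """Resolve a virtual path back to this machine's physical location."""
    assert path.lower().startswith(VIRTUAL_ROOT.lower())
    return physical_root + path[len(VIRTUAL_ROOT):]

# Two devs with different sync folders produce identical virtual paths:
a = virtualize(r"D:\dev\Game\Source\Main.cpp", r"D:\dev\Game")
b = virtualize(r"C:\p4\Game\Source\Main.cpp", r"C:\p4\Game")
assert a == b == r"z:\uevfs\Source\Main.cpp"
```

Because the toolchain only ever sees the virtual paths, the same cache entry is valid on any machine, and resolution back to physical files happens per machine at build time.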

What are the cons with vfs?

The first time you debug in Visual Studio, it will not find the files, so you will have to navigate manually to the first file; after that it resolves automatically. A small price for fully cached builds, if you ask me.

I recently fixed Xcode to work with virtual paths and debugger breakpoints, but I do know Android debugger breakpoints are not fixed yet, and there might be other less-used paths that are not working properly for breakpoints.

Interesting. I get cache hits and have not enabled VFS. I will enable it and see if it improves things, though.

I’ve recently reconfigured our build machines and a couple of devs’ local machines to use this, and bLinkRemote as well, and it seems to be working quite well. There have been a couple of link errors in the 4 days since turning it on, but overall it’s improved our average build times quite a bit. I haven’t heard any feedback from the devs yet, but they probably haven’t done many (if any) complete rebuilds yet.

OK, cool, keep me updated. I have just done a really big refactoring in UBA to reduce the number of kernel calls around CreateFileMapping/MapViewOfFile, and it has sped up my local builds a lot. I have managed to build FortniteEditor in under 3m30s from clean (using 1,500 helper cores from AWS Oregon with 13ms latency). Before this refactoring I was insanely I/O-bound because of all the kernel calls, so it usually took 7-8 minutes, and it didn’t help to have more than 600 helper cores.

I also always have “compressed obj files” enabled, since that also reduces I/O.

Oh my, that is some reeeeally slow D: drive you have there. Writing ~150 MB/s :slight_smile:

Based on your trace, it looks like you quickly fill up your SSD cache, and then the machine never catches up writing to disk. The first ~1,000 files go quite fast and you are network-bound (it looks like you have a 1Gbps connection); then your machine just dies.

And as you say, you are sitting at max memory, which most likely causes page-file activity, which in turn causes stuttering. Can you check how much memory UBT is actually using? I tested here with 100% cache hits, and my UBT process never goes above 5GB.

Oh, one thing you can do is enable obj file compression; that should help your scenario a lot (except the memory, of course).

Hi Henrik,

On UE 5.6.1, I managed to get the cache working only via the command line. Now, with 5.7, the BuildConfiguration.xml parameters are working.

Seeing the cache hits is exciting and cool.

I have an issue on the PC I’m using (I don’t know about other PCs): it becomes unresponsive and difficult to use until the build finishes. Memory is at 98%, and disk activity on C and D are both often at 100%.

Is there something I can do to reduce the load on the PC?

I tried various things, including the UnrealBuildAccelerator CacheMaxWorkers setting, but it does not seem to help (or perhaps it doesn’t work).

I’m including the UBA trace file.

My post says the file is no longer available.

Trying again