Hi there,
I’m going to preface my response with two Knowledge Base articles I’ve put together that cover practical debugging for both Horde and UBA:
- https://dev.epicgames.com/community/learning/knowledge-base/oWG6/unreal-engine-practical-debugging-setup-tips-for-horde
- https://dev.epicgames.com/community/learning/knowledge-base/jB32/unreal-engine-fortnite-practical-unrealbuildaccelerator-debugging-tips
>Does the build accelerator sync all local changes to the remote agents when they are assisting with builds?
- In short, yes; the longer answer is more nuanced. UBA uses detours (API interception on the local process) to track every file an action touches, which effectively ensures that all inputs required to complete a remote action are delivered to the helper agent.
>Console devices (etc)…
- We are on an internal network here at Epic, so our devices & users are going to be in the same network/subnet.
- Any agent that is therefore executing a build (and a subsequent on-device test via Gauntlet) would need access to that machine via the specified IP.
- The same is true for users who are just reserving a machine.
- For next steps, we *sort of* allude to it on the DeviceManager page, but you’d most likely want to lean on the Gauntlet integration as the de facto path forward.
- Gauntlet will use the reservation API to acquire the device (our device manager support is more or less a key-value store of devices, plus metadata tracking whether each one is in use or not).
- The Gauntlet integration subheading has the details on the BuildGraph usage.
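- To make the BuildGraph side a bit more concrete, here’s a rough sketch of a node that runs Gauntlet against a reserved device. Everything here (node name, project, pool, server URL) is a placeholder, and the exact arguments vary by engine version - treat the Gauntlet integration docs as authoritative:

```xml
<!-- Hypothetical sketch: names, pool, and the Horde URL are placeholders,
     and the exact Gauntlet arguments vary by engine version. -->
<Node Name="Boot Test On Device" Requires="#StagedBuild">
  <Command Name="RunUnreal" Arguments="-project=MyGame -platform=PS5 -configuration=Development -build=&quot;$(StagedDir)&quot; -test=UE.BootTest -deviceurl=&quot;https://horde.example.com&quot; -devicepool=&quot;default&quot;"/>
</Node>
```

The key part is pointing Gauntlet at the Horde server via the device reservation arguments, so it acquires/releases the device through the same reservation API mentioned above.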
>When it comes to giving more than 1 person compute access for build acceleration, I assume you would just add them to the array in the config file along side each other?
- Yes, that should be manageable at that scale. Beyond that, you should be able to use an OIDC group claim rather than listing each user individually.
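- As a rough sketch of what that could look like (property names and claim types here are assumptions on my part - check the ACL/compute sections of the Horde settings docs for your version’s exact schema):

```jsonc
// Purely illustrative - field names and claim types are assumptions,
// not the exact schema for your Horde version.
{
  "Compute": [
    {
      "Id": "default",
      "Acl": {
        "Entries": [
          // Individual users, side by side in the array:
          { "Claim": { "Type": "http://epicgames.com/ue/horde/user", "Value": "alice" }, "Actions": ["AddComputeTasks"] },
          { "Claim": { "Type": "http://epicgames.com/ue/horde/user", "Value": "bob" }, "Actions": ["AddComputeTasks"] },
          // Or one OIDC group claim covering everyone in that group:
          { "Claim": { "Type": "groups", "Value": "build-users" }, "Actions": ["AddComputeTasks"] }
        ]
      }
    }
  ]
}
```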
> Yesterday, I also did the configuration for analytics, and I could see the analytics chart graphs and stuff, though they were not populated. I did confirm our editor was sending data to it though, so I assumed maybe there is a periodic digestion of that data in order to populate the charts.
- There shouldn’t be a long ingestion delay before the charts populate; it mostly comes down to how your metrics are configured (and which telemetry events are being pushed into each metric).
- There are some slightly confusing metric aggregations that depend on metadata like the Job Step name in the metric ‘query’ - meaning events won’t be consumed if that metadata is missing. Once you’ve narrowed it down to a specific metric or view you’re looking at, we can investigate a bit more.
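- For orientation, a metric definition is shaped roughly like the sketch below - the field names here are assumptions rather than the exact schema, so the analytics docs for your Horde version are the authoritative reference:

```jsonc
// Sketch only - field names are assumptions, not the exact schema.
{
  "Metrics": [
    {
      "Id": "compile-duration",
      // If the filter/grouping keys off job or step metadata, telemetry
      // events missing that metadata won't be consumed into the chart:
      "Filter": "EventName eq 'Core.Compile'",
      "Property": "Duration",
      "GroupBy": "Platform",
      "Function": "Average"
    }
  ]
}
```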
For your authentication, are you just using the Horde auth?
- I can dig a bit more on my end with QueryMetrics via Horde auth
Julian