HordeAgent job driver has ~40s deadtime at start of job when not running in AWS

Horde agents have ~40s of "dead time" at the beginning of the first node executed on an agent. The gap is visible in the Horde UI, when watching execution in real time, and in OTel traces, but it does not appear in the logs; see the attached screenshot.

After some investigation, I’ve determined that this is due to JobDriver attempting to set AWS-related environment variables:

```csharp
// JobExecutor.cs

// TODO: These are AWS specific, this should be extended to handle more clouds or for licensees to be able to set these
newEnvVars["UE_HORDE_AVAILABILITY_ZONE"] = Amazon.Util.EC2InstanceMetadata.AvailabilityZone ?? "";
newEnvVars["UE_HORDE_REGION"] = Amazon.Util.EC2InstanceMetadata.Region?.DisplayName ?? "";
```

These properties call AWS SDK methods that make HTTP requests to the EC2 instance metadata service, which is only reachable from inside EC2. Because of the SDK's built-in retries and caching, attempting to fetch these two values when not running within AWS causes ~40s of inactivity on the first node running on an agent, and ~16s for every subsequent node on the same agent.
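For reference, the delay can be reproduced outside Horde with a minimal standalone program. The snippet below is illustrative only and assumes the AWSSDK.Core package is referenced (it provides `Amazon.Util.EC2InstanceMetadata`); run it on a non-EC2 machine to see the stall.

```csharp
// Minimal timing repro (not part of Horde): reads the same two metadata
// properties JobExecutor.cs uses and reports how long the lookups take.
using System;
using System.Diagnostics;

class ImdsTimingRepro
{
    static void Main()
    {
        Stopwatch sw = Stopwatch.StartNew();

        // Each property issues HTTP requests to the instance metadata service
        // (169.254.169.254); off EC2 these requests time out and are retried.
        string az = Amazon.Util.EC2InstanceMetadata.AvailabilityZone ?? "";
        string region = Amazon.Util.EC2InstanceMetadata.Region?.DisplayName ?? "";

        sw.Stop();
        Console.WriteLine($"AZ='{az}' Region='{region}' took {sw.Elapsed.TotalSeconds:F1}s");
    }
}
```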

Hey Yang,

Thanks for this! I'll see if we can conditionally execute this based on the execution context (that is, don't do anything when !AWS); a rough sketch of what that could look like is below.
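Purely as a sketch (the `Ec2Detection` helper, the IMDSv2 token probe, and the 1-second timeout are placeholders, not the actual change): probe the instance metadata endpoint with a short timeout and only read the metadata properties when the probe succeeds.

```csharp
// Hypothetical guard, sketching "don't do anything when !AWS".
using System;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

static class Ec2Detection
{
    // Quick reachability check against the IMDSv2 token endpoint with a short
    // timeout, so non-EC2 hosts fail fast instead of waiting out SDK retries.
    public static async Task<bool> IsRunningOnEc2Async(CancellationToken cancellationToken)
    {
        try
        {
            using HttpClient client = new HttpClient { Timeout = TimeSpan.FromSeconds(1) };
            using HttpRequestMessage request = new HttpRequestMessage(HttpMethod.Put, "http://169.254.169.254/latest/api/token");
            request.Headers.Add("X-aws-ec2-metadata-token-ttl-seconds", "21600");
            using HttpResponseMessage response = await client.SendAsync(request, cancellationToken);
            return response.IsSuccessStatusCode;
        }
        catch (Exception)
        {
            // Timeout or connection failure: assume we are not running on EC2.
            return false;
        }
    }
}

// Usage in the env-var setup (sketch only):
// if (await Ec2Detection.IsRunningOnEc2Async(cancellationToken))
// {
//     newEnvVars["UE_HORDE_AVAILABILITY_ZONE"] = Amazon.Util.EC2InstanceMetadata.AvailabilityZone ?? "";
//     newEnvVars["UE_HORDE_REGION"] = Amazon.Util.EC2InstanceMetadata.Region?.DisplayName ?? "";
// }
```

An alternative would be to gate the lookups on agent configuration rather than a runtime probe, so licensees on other clouds can opt in explicitly, along the lines of the existing TODO.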

Kind regards,

Julian