Horde artifacts clean-up depends on streams order for unknown reason

Hi, we are facing a very strange issue where Horde artifacts do not get cleaned up properly. We found that, depending on the order of the streams in project.json, a different set of streams gets cleaned up. We were running out of space quickly, so to buy ourselves more time we swapped the order of the streams, and voilà: different streams, and a different number of them, got cleaned up.

Looking into the code, at least on the surface this doesn't make much sense to us. There are also no errors or warnings in the logs around artifact expiration; it just silently stops checking for expired artifacts.

Do you have any idea what could cause such an issue? How would you proceed to debug it further?

I have attached an example.project.json file and filtered Horde logs that show what is going on.

Here are the changes we made to the ExpireArtifactsAsync() function to produce the logs provided in the attachment.

async ValueTask ExpireArtifactsAsync(CancellationToken cancellationToken)
{
	... 
	// Expire any active streams listed in the config
	foreach (ProjectConfig projectConfig in buildConfig.Projects)
	{
		foreach (StreamConfig streamConfig in projectConfig.Streams)
		{
			// BEGIN ENGINE MOD
			_logger.LogInformation("Checking for expired artifacts in {ProjectId}:\'{StreamId}\'...", projectConfig.Id, streamConfig.Id);
			// END ENGINE MOD
			foreach (ArtifactTypeConfig artifactTypeConfig in streamConfig.GetAllArtifactTypes())
			{
				// BEGIN ENGINE MOD
				_logger.LogInformation("Checking for expired artifacts in {ProjectId}:\'{StreamId}:{ArtifactType}\'...", projectConfig.Id, streamConfig.Id, artifactTypeConfig.Type);
				// END ENGINE MOD
				await AddExpiryRecordAsync(streamConfig.Id, artifactTypeConfig.Type, utcNow, cancellationToken);
				await ExpireArtifactsForStreamAsync(streamConfig.Id, artifactTypeConfig, utcNow, cancellationToken);
			}
		}
	}
	...
}
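In case it is relevant, the next thing we are considering is wrapping the two calls in the inner loop in a try/catch, so that a swallowed exception or a cancellation would show up in the logs instead of the loop stopping silently. This is only a sketch on our side (the catch blocks and log messages are ours, not existing engine code); it would replace the two await lines inside the innermost foreach above.

				// BEGIN ENGINE MOD (proposed, not yet applied)
				try
				{
					await AddExpiryRecordAsync(streamConfig.Id, artifactTypeConfig.Type, utcNow, cancellationToken);
					await ExpireArtifactsForStreamAsync(streamConfig.Id, artifactTypeConfig, utcNow, cancellationToken);
				}
				catch (OperationCanceledException)
				{
					// Would tell us whether the background task is being cancelled partway through the stream list
					_logger.LogWarning("Artifact expiry cancelled while processing {StreamId}:{ArtifactType}", streamConfig.Id, artifactTypeConfig.Type);
					throw;
				}
				catch (Exception ex)
				{
					// Would tell us whether a failure in one stream/type silently aborts the remaining ones
					_logger.LogError(ex, "Failed to expire artifacts for {StreamId}:{ArtifactType}", streamConfig.Id, artifactTypeConfig.Type);
				}
				// END ENGINE MOD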

Steps to Reproduce