Running several editors off the same local DDC at the same time

I have several build machines, each of which has several streams, and they can all perform full builds. The issue is that the DDC takes up about 50GB per stream per agent, which adds up quickly.
I can run them all with the same shared DDC, but that’s just another 50GB folder somewhere.
However, all build agents also have access to the same disk.

What I can do instead is create a directory junction for each workspace's DDC folder, pointing it at the same folder on the shared drive.
This means that all agents and all workspaces would share the same local DDC. The builds would run with -ddc=noshared.
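For concreteness, here is a minimal sketch of how the junctions could be set up, using purely hypothetical paths for the shared location and the per-workspace DDC folders (the real paths depend on the agent layout):

```python
import subprocess
from pathlib import Path

# Hypothetical paths -- substitute the actual shared disk and workspace layout.
SHARED_DDC = Path(r"D:\SharedDDC")
WORKSPACE_DDC_DIRS = [
    Path(r"C:\Agents\Agent1\Workspace\DerivedDataCache"),
    Path(r"C:\Agents\Agent2\Workspace\DerivedDataCache"),
]

SHARED_DDC.mkdir(parents=True, exist_ok=True)

for ddc_dir in WORKSPACE_DDC_DIRS:
    # mklink refuses to create a junction over an existing directory,
    # so any existing per-workspace DDC folder has to be removed first.
    if ddc_dir.exists():
        print(f"skipping {ddc_dir}: already exists")
        continue
    ddc_dir.parent.mkdir(parents=True, exist_ok=True)
    # Directory junctions are created with cmd's built-in "mklink /J" on Windows.
    subprocess.run(
        ["cmd", "/c", "mklink", "/J", str(ddc_dir), str(SHARED_DDC)],
        check=True,
    )
```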

There are four situations here to consider:

  1. Two agents are trying to put a different version of the same asset into the DDC at the same time
  2. Two agents are trying to put the same version of the same asset into the DDC at the same time
  3. Two agents are trying to get a different version of the same asset from the DDC at the same time
  4. Two agents are trying to get the same version of the same asset from the DDC at the same time

The first situation shouldn't be a problem since two assets that differ from one another will resolve to different hashes, so there is no write collision there.
The second situation shouldn't be a problem either: two identical assets produce the same cached data, so the race doesn't matter because the outcome is the same either way.
The last two are read operations, which I list for completeness' sake, but there should be no concurrency issues with those either.

I am still a bit worried about the second situation, though. Is there a scenario where the result of the clash would be a corrupted cache file or some other problem?
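The failure mode I'm picturing is two writers streaming into the same cache file at once and interleaving their bytes. A cache that writes each entry to a uniquely named temp file and then atomically renames it into place can't hit that, since the rename either lands a complete file or replaces it with another complete, identical one. Below is a sketch of that pattern to make the question concrete; it is an illustration of the general technique, not a claim about how the engine's file backend is actually implemented:

```python
import os
import tempfile
from pathlib import Path

def put_cache_entry(cache_root: Path, key: str, payload: bytes) -> Path:
    """Write a cache entry so that concurrent writers of the same key are safe."""
    final_path = cache_root / key
    final_path.parent.mkdir(parents=True, exist_ok=True)

    # Each writer gets its own uniquely named temp file in the same directory
    # (same volume), so the final rename is a metadata-only operation.
    fd, tmp_name = tempfile.mkstemp(dir=final_path.parent, prefix=final_path.name + ".")
    try:
        with os.fdopen(fd, "wb") as f:
            f.write(payload)
        # os.replace is atomic on the same volume; if two agents race, one
        # complete file simply replaces the other complete, identical file.
        os.replace(tmp_name, final_path)
    except BaseException:
        # Never leave half-written temp files behind.
        try:
            os.unlink(tmp_name)
        except FileNotFoundError:
            pass
        raise
    return final_path
```

If the local file backend works roughly like this, situation 2 is harmless; if it writes directly into the final file, the race could matter, which is essentially what I'm asking.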

Other than that, are there any other situations or potential problems with this setup?