Work From Home: Cloud DDC (experimental)

I just went through quite the journey getting this to work, specifically with step 3. I’ll try to help out with some hints here, though there are still some rough patches to solve.

  • I could not get RunUAT.bat UploadDDCToAWS to recognize AWSSDK.Core, so I was getting the same exceptions as Ranisz. My workaround was to literally run it through the debugger with AutomationTool as my active project. That became necessary anyway, because I was editing .cs files to add additional logging while troubleshooting, so I needed to build it regardless. It just worked that way. I still need to figure out why the .bat file can’t find it.
  • The credentials file is indeed a simple text file, following the link mastercoms posted. You can call it whatever you want, e.g. aws-credentials.txt. It looks like this:
    [default]
    aws_access_key_id = YOUR_AWS_ACCESS_KEY_ID
    aws_secret_access_key = YOUR_AWS_SECRET_ACCESS_KEY
    
    [MyProject]
    aws_access_key_id = ANOTHER_AWS_ACCESS_KEY_ID
    aws_secret_access_key = ANOTHER_AWS_SECRET_ACCESS_KEY
  • You can add Log.TraceInformation(string.Format("AWS File Upload Request:\n BucketName: {0}\n Key: {1}\n FilePath: {2}", BucketName, Key, File.FullName)); right before PutObjectRequest request = new PutObjectRequest(); in UploadFileInner in UploadDDCToAWS.cs; it gives you specific diagnostic info for each upload attempt (see the snippet after this list for exactly where it lands).
  • An example command:
    UploadDDCToAWS -Manifest="d:/path/to/manifest" -FilterDir="d:/path/to/project/Saved/DDCForAWS" -Bucket="mys3/sharedDrive/s3ddc" -CredentialsFile="path/to/aws.credentials.txt" -CredentialsKey="MyProject" -CacheDir="path/to/shared/ddc" -Submit
  • If you see any upload exception containing The specified bucket is not valid., you need to drop the prefix from your -Bucket input: no s3://, no https://, etc. (see the before/after example following this list). That took some trial and error to solve.
  • I found it useful to download the AWS CLI. On Windows it looks like a full install, but all it really does is extend your command line with the aws command.
  • And here’s a cheatsheet of AWS CLI commands. I discovered the commands are quite picky: if you run something like aws s3 ls s3://mycompany/sharedDrive/s3ddc/bulk, it only reports PRE bulk/, which is useless. You need to append a trailing / to see the actual contents of the directory (compare the two commands after this list). In my case that was 39 files of roughly 100 MB each.
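
For clarity on the logging addition above, here’s roughly what that spot in UploadFileInner looks like with the extra line in place. Only the Log.TraceInformation call is the addition; the surrounding context is paraphrased and may differ slightly between engine versions:

    // UploadDDCToAWS.cs, inside UploadFileInner()
    // Added: dumps the bucket, key and local file path for every upload attempt
    Log.TraceInformation(string.Format(
        "AWS File Upload Request:\n BucketName: {0}\n Key: {1}\n FilePath: {2}",
        BucketName, Key, File.FullName));

    // Existing line the addition sits directly above
    PutObjectRequest request = new PutObjectRequest();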
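
And to make the bucket prefix gotcha concrete, here’s a before/after using the placeholder bucket from my example command (illustrative only; the error text is what showed up in my upload exceptions):

    -Bucket="s3://mys3/sharedDrive/s3ddc"      <- fails with "The specified bucket is not valid."
    -Bucket="https://mys3/sharedDrive/s3ddc"   <- also invalid; drop the prefix
    -Bucket="mys3/sharedDrive/s3ddc"           <- works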
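
Same idea for the AWS CLI pickiness: the two ls commands below differ only by the trailing slash, using the same placeholder path as the bullet above:

    aws s3 ls s3://mycompany/sharedDrive/s3ddc/bulk     <- only prints "PRE bulk/"
    aws s3 ls s3://mycompany/sharedDrive/s3ddc/bulk/    <- lists the actual files under bulk/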

We have a custom engine version based on 5.0.1, but it looks like the system hasn’t changed at all since it was initially introduced near the beginning of COVID.