Cloud DDC MongoDb

I’m trying to stand up a Cloud DDC instance in AWS and want to avoid Scylla, as we won’t need multiple regions. Reading the online docs and the readme.md in source control, it seems like this is supported, but my current deployment terminates because the Scylla properties aren’t set:

```

'LocalDatacenterName' field is required
'LocalKeyspaceSuffix' field is required

```

Is the expectation for just using Mongo to set those and not the scylla connection string? Or is there a config option I’m missing to bypass Scylla?

Hey Jim

Yes, Mongo is available, though it’s a fairly untested path, as you’re noticing. The intention is for it to mostly be useful for small licensees that know they will never need multiple regions (if you think there is any chance you would want that, we recommend setting up a single-region Scylla cluster, as that can be migrated to multi-region while Mongo cannot be).

It’s not intentional that you have to set those values, as they are Scylla-specific. Looking at the tests we have that use Mongo, we do seem to set them anyway (mostly because we want the docker compose scripts we set this up in to allow switching between Mongo and Scylla easily). So it’s very possible this is a bug, and you could just set them to some dummy value to get around the config validation we run. Without your configuration file or logs it’s hard to tell whether you are missing an option to bypass Scylla; that is also possible (the log output when starting the pod should list which implementation of each of our different stores is in use, and none of those should reference Scylla).
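As a sketch of that workaround, dummy values could look something like this — the two key names come from the validation errors above, but the `Scylla` section name is an assumption and should be checked against your config:

```

# Sketch only: placeholder values to satisfy the config validation when
# Scylla is not actually used as a backend.
# The "Scylla" section name is an assumption; the key names are taken from
# the validation errors.
Scylla:
  LocalDatacenterName: dummy-dc
  LocalKeyspaceSuffix: dummy

```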

Right, the information I was thinking of was the values of the different implementations in UnrealCloudDDCSettings, but I realize we do not actually echo those to the log at any point.

So I think the easiest is for you to check your config files (the config map, if you are using Helm and Kubernetes) and send me the settings you have under "UnrealCloudDDC".

It should look something like this:

```

UnrealCloudDDC:
  BlobIndexImplementation: Scylla
  BuildStoreImplementation: Scylla
  ContentIdStoreImplementation: Scylla
  LeaderElectionImplementation: Disabled
  ReferencesDbImplementation: Scylla
  ReplicationLogWriterImplementation: Scylla
  ServiceDiscoveryImplementation: Kubernetes
  StorageImplementations:
  - FileSystem
  - S3

```

In your case, we would really be looking to verify that none of those says Scylla, and that you have overrides for all of the implementations that default to Scylla in appsettings.Production.json.

Thanks for the reply, Joakim! Here’s pertinent log info:

```

DataAnnotation validation failed for 'ScyllaSettings' members:
- 'LocalDatacenterName' with the error: 'The LocalDatacenterName field is required.'
- 'LocalKeyspaceSuffix' with the error: 'The LocalKeyspaceSuffix field is required.'

```

The termination is an OptionsValidationException.

And nothing about Mongo anywhere :frowning:

Ahh! Should all of the Scylla values be changed to Mongo (or MongoDB? not sure what the syntax is), or should some be "Memory"?

I’ve managed to get through all that and set up Okta auth, but PUTs all seem to be failing:

```

LogDerivedDataCache: Display: HTTP: PUT https://<myserver>/api/v1/refs/ddc/legacyskeletalmesh/7ce2a017a453f0745f87b8004d27ade92ca8d30a -> 403 (sent 44 bytes, 0.038 seconds 0.000|0.000|0.000|0.038) Content type '*/*' of size 0
LogDerivedDataCache: Display: Cloud: Failed to put reference object for put of LegacySKELETALMESH/7ce2a017a453f0745f87b8004d27ade92ca8d30a from '/Game/<Redacted>/<Redacted>/Meshes/<Redacted>.<Redacted>_M'

```

On the server, I’m seeing:

```

"Authorization failed. {Reason}": "These requirements were not met: Jupiter.ScopeAccessRequirement"

```

(Even though the editor thinks it auth’d fine)

In Okta, I’ve set up the default API server per the instructions, I have an Access Policy to enable all scopes, etc.

Finally got it up and fully running! ‘UseLegacyConfiguration’ went a long way, and then it was whack-a-mole with some S3 permissions to fix the 500s. Setting all this up with Terraform probably didn’t do me any favors!
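For reference, the flag mentioned above might look something like this in the config — the flag name is taken from the message, but where it sits in the file is an assumption and should be confirmed against your appsettings/values file:

```

UnrealCloudDDC:
  # Hypothetical placement; only the flag name comes from the thread above.
  UseLegacyConfiguration: true

```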

Great that you got it sorted!

Will leave this open in case you run into any more issues and close it in a week or so. Let me know if there is anything more I can do to help.