I need some opinions on something we’re considering implementing.
What we’re considering is a central repo server that holds the newest version of all DOT data, as well as packages describing the computations that need to be completed before the iteration can finish. These packages are distributed among the clients, and each client is assigned a different part. Since the packages are split along boundaries where there is no data interdependence, no package depends on any other computation from the current iteration. After a package is finished, the finalized results are spread between the clients and then pushed to the repo. All the clients then grab the next computation package.
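Roughly, the client loop I’m picturing looks like this (just a Python sketch to make the flow concrete; the repo HTTP API, the URL, and the helper functions are placeholders, nothing here is implemented yet):

```python
import requests  # assuming a plain HTTP API on the repo server for now

REPO_URL = "http://repo.example:8080"  # placeholder address

def run_computation(package):
    # Placeholder for the actual DOT computation over the package's entities.
    return {"package_id": package["id"], "data": "..."}

def broadcast_to_peers(results):
    # Placeholder: spread the finalized results to the other clients.
    pass

def client_loop(client_id):
    while True:
        # Ask the repo which computation package this client should handle.
        resp = requests.get(f"{REPO_URL}/next_package", params={"client": client_id})
        package = resp.json()
        if not package:
            break  # nothing left to hand out; the iteration is finished

        # Packages are split along lines with no data interdependence, so this
        # only needs previous-iteration data the client already holds locally.
        results = run_computation(package)

        # Spread the finalized results to the other clients, then push them
        # to the repo so they become part of the next iteration's snapshot.
        broadcast_to_peers(results)
        requests.post(f"{REPO_URL}/submit", json=results)
```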
So every client holds a copy of all AI data from the previous iterations (with a Markov property, of course: if new data is available for a certain entity, the old data is overwritten) along with the computation packages. The previous-iteration data is set up as a torrent, both for new clients and to avoid file inconsistencies. When a new computation is completed, the torrent is updated so that the files that were just finished get redownloaded (since those files will, of course, have changed).
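The Markov-property part is really just last-write-wins per entity. A minimal sketch of what I mean (the entity IDs, iteration tags, and in-memory dict are stand-ins for whatever the real on-disk layout ends up being):

```python
class EntityStore:
    """Keeps only the latest state per entity; older iterations are discarded."""

    def __init__(self):
        self.latest = {}  # entity_id -> (iteration, state)

    def apply_update(self, entity_id, iteration, state):
        current = self.latest.get(entity_id)
        # Only the newest iteration's data matters, so older state is overwritten.
        if current is None or iteration > current[0]:
            self.latest[entity_id] = (iteration, state)

    def state_of(self, entity_id):
        entry = self.latest.get(entity_id)
        return entry[1] if entry else None
```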
The issue I have is that either the packages will be very small, or the clients will need to finish downloading the new packages before they can move on to the next file. The files that DOT outputs are far too large to keep entirely in RAM (6GB or so), but at the same time they’re too dynamic to download just once when the client connects to the server.
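To make that tension concrete: the best case is overlapping the next download with the current computation, roughly like the sketch below (fetch_next, run_computation, and submit are stand-ins I made up), and even that only helps if the download finishes before the current package does.

```python
import queue
import threading

def prefetch_packages(fetch_next, work_queue):
    # Background thread: keep pulling the next package so the compute thread
    # isn't left idle waiting on the network.
    while True:
        package = fetch_next()
        work_queue.put(package)
        if package is None:  # None signals that the iteration is finished
            return

def compute_loop(fetch_next, run_computation, submit):
    work_queue = queue.Queue(maxsize=1)  # buffer only one package ahead
    threading.Thread(
        target=prefetch_packages, args=(fetch_next, work_queue), daemon=True
    ).start()
    while True:
        package = work_queue.get()
        if package is None:
            break
        submit(run_computation(package))
```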
I can’t have the server do all the computation, since the upload bandwidth requirements would be tremendous, and the algorithms involved don’t scale well with the number of players roaming freely. I can’t have each player do the computation for only the entities they affect, since entities can affect other entities just as much as the player can. And if I tried to design it so the player computed every entity the effect propagates outward to, the player would end up computing every entity available shortly after.
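To illustrate that last point: if responsibility has to follow the “affects” relation, the set of entities a player owns is the transitive closure of that relation. A toy sketch (the affects graph and the function are made up for illustration):

```python
from collections import deque

def entities_owned_by(start_entities, affects):
    # affects: dict mapping an entity to the entities it can affect.
    owned = set(start_entities)
    frontier = deque(start_entities)
    while frontier:
        entity = frontier.popleft()
        for other in affects.get(entity, []):
            if other not in owned:
                owned.add(other)
                frontier.append(other)
    return owned

# e.g. the player touches "a", which can affect "b", which can affect "c", ...
affects = {"a": ["b"], "b": ["c"], "c": ["d"]}
print(entities_owned_by(["a"], affects))  # -> all four entities
```

Since the interaction graph in practice is more or less connected, a few hops of this expansion cover nearly every entity, which is exactly the problem.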