There are already some good questions in here, but I figure I’ll throw in some feedback anyhow.
With VR, we’re seeing greater demand for environments you can interact with. If we want to bring that into multiplayer, it means replicating a lot of non-baked motion. In the past that usually meant physics, but now, on top of physics, we have all kinds of motion-controller input to account for, which can be even more unpredictable.
Many games just use canned animations and dice rolls rather than actual physical events. Paragon, for example, feels like you’re fighting large hitboxes rather than physical characters when compared to Robo Recall; I’m sure that makes it easier to replicate.
If we want to come close to replicating Robo Recall, we need better prediction tools.
There are going to be cases where you want to handle a lot of secondary motion or local prediction on the client side, but need an efficient way to tell the client to start or stop doing that, and sorting that out can become rather tedious if you want fine control. In the context of advanced character movement, perhaps you only want partial replication, or to replicate only a few poses triggered by certain physics events (using that lovely pose snapshotting tool). Perhaps only for specific end-effector bones, such as an initial collision position/force and a rest/blend-out position. A rough sketch of what I mean is below.
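To make that concrete, here’s a minimal sketch of the kind of partial replication I’m describing: the server replicates only a compact per-bone “physics event” (initial hit position/impulse plus a rest/blend-out target) and a single flag gating client-side secondary motion, and the client runs everything else locally off the RepNotifies. The class, struct, and helper names (`APhysicsEventCharacter`, `FBoneImpactEvent`, `StartLocalBoneSim`, etc.) are all hypothetical; only the replication machinery (`ReplicatedUsing`, `DOREPLIFETIME`, the quantized vector types) is stock UE4.

```cpp
#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "Engine/NetSerialization.h"
#include "Net/UnrealNetwork.h"
#include "PhysicsEventCharacter.generated.h"

// Compact event the server sends instead of streaming full bone transforms.
USTRUCT()
struct FBoneImpactEvent
{
    GENERATED_BODY()

    UPROPERTY()
    FName BoneName = NAME_None;              // end-effector bone, e.g. "hand_r"

    UPROPERTY()
    FVector_NetQuantize ImpactLocation = FVector::ZeroVector;   // initial collision position

    UPROPERTY()
    FVector_NetQuantize10 ImpactImpulse = FVector::ZeroVector;  // initial collision force

    UPROPERTY()
    FVector_NetQuantize RestLocation = FVector::ZeroVector;     // rest/blend-out target
};

UCLASS()
class APhysicsEventCharacter : public AActor
{
    GENERATED_BODY()

public:
    // Server writes the latest impact here; clients react in the OnRep
    // instead of simulating or replicating the full pose every frame.
    UPROPERTY(ReplicatedUsing = OnRep_LastImpact)
    FBoneImpactEvent LastImpact;

    // One replicated bool to tell clients to start/stop local secondary motion.
    UPROPERTY(ReplicatedUsing = OnRep_bSecondaryMotionEnabled)
    bool bSecondaryMotionEnabled = false;

    virtual void GetLifetimeReplicatedProps(
        TArray<FLifetimeProperty>& OutLifetimeProps) const override
    {
        Super::GetLifetimeReplicatedProps(OutLifetimeProps);
        DOREPLIFETIME(APhysicsEventCharacter, LastImpact);
        DOREPLIFETIME(APhysicsEventCharacter, bSecondaryMotionEnabled);
    }

    UFUNCTION()
    void OnRep_LastImpact()
    {
        // Kick off a purely local simulation for just this bone: apply the
        // impulse, then blend toward RestLocation (e.g. via a pose snapshot).
        StartLocalBoneSim(LastImpact);
    }

    UFUNCTION()
    void OnRep_bSecondaryMotionEnabled()
    {
        if (bSecondaryMotionEnabled)
        {
            ResumeLocalSecondaryMotion();
        }
        else
        {
            StopLocalSecondaryMotion();
        }
    }

private:
    // Hypothetical helpers; the real work (ragdoll blending, pose snapshots)
    // would live here and never touch the network.
    void StartLocalBoneSim(const FBoneImpactEvent& Event) {}
    void ResumeLocalSecondaryMotion() {}
    void StopLocalSecondaryMotion() {}
};
```

The point of the quantized vector types and the single RepNotify is that the wire cost is a handful of bytes per physics event, rather than continuous bone-transform replication, while the client still gets enough to reproduce a convincing reaction locally.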
Someone said he thought VR would end up being used for social interaction, and I consider throwing physics coffee cups and paper-wads at people social interaction.