Gameplay Ability System and VR

While the Gameplay Ability System (GAS) and the Action RPG sample project are a compelling way to architect and organize gameplay logic in a decoupled, encapsulated fashion, I'm wondering how people have used (or could use) it in multiplayer VR games.

Multiuser VR games typically allow multiple users to interact with the same object – for example, one user handing an object or tool to a second user. GAS seems to require that a player own the tool for the sake of local responsiveness and prediction, so how should that ownership change when one user hands the tool to another?
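For the handoff itself, the closest thing I can picture is the tool granting its abilities to whoever currently holds it and revoking them on release. This is only a sketch of what I have in mind, not engine API – AHandheldTool, GrantedAbilities, OnPickedUp, and OnDropped are placeholder names of mine:

```cpp
#include "GameFramework/Actor.h"
#include "AbilitySystemComponent.h"
#include "AbilitySystemBlueprintLibrary.h"
#include "Abilities/GameplayAbility.h"
#include "HandheldTool.generated.h"

UCLASS()
class AHandheldTool : public AActor
{
	GENERATED_BODY()

public:
	// Abilities this tool makes available to whoever is currently holding it.
	UPROPERTY(EditDefaultsOnly, Category = "Abilities")
	TArray<TSubclassOf<UGameplayAbility>> GrantedAbilities;

	// Server-side: grant the tool's abilities to the new holder's ASC.
	void OnPickedUp(AActor* NewHolder)
	{
		if (!HasAuthority())
		{
			return;
		}

		if (UAbilitySystemComponent* ASC =
			UAbilitySystemBlueprintLibrary::GetAbilitySystemComponent(NewHolder))
		{
			for (const TSubclassOf<UGameplayAbility>& AbilityClass : GrantedAbilities)
			{
				GrantedHandles.Add(ASC->GiveAbility(
					FGameplayAbilitySpec(AbilityClass, 1, INDEX_NONE, this)));
			}
			HolderASC = ASC;
		}
	}

	// Server-side: revoke them again when the tool is released or handed off.
	void OnDropped()
	{
		if (!HasAuthority() || !HolderASC.IsValid())
		{
			return;
		}

		for (const FGameplayAbilitySpecHandle& Handle : GrantedHandles)
		{
			HolderASC->ClearAbility(Handle);
		}
		GrantedHandles.Reset();
		HolderASC.Reset();
	}

private:
	TArray<FGameplayAbilitySpecHandle> GrantedHandles;
	TWeakObjectPtr<UAbilitySystemComponent> HolderASC;
};
```

But I don't know whether re-granting abilities on every handoff is the intended pattern, or whether it causes problems for any predicted abilities still in flight on the previous holder.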

Is there any standard or expected way to use GAS in a multiplayer VR setup that allows objects to be passed between two users, as well as interactions with objects that aren't "in the possession" of any player – for example, a piece of fixed equipment such as a lever on the wall that opens a door, or a security keypad whose buttons they push? A multiplayer VR escape-room-style game is a tangible example of a space where many items need to be used by various users.
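For the fixed equipment, the only approach I can think of is giving the world actor its own AbilitySystemComponent via IAbilitySystemInterface, so the lever's "open the door" logic lives in an ability granted to the lever itself rather than to any player. A rough sketch of what I mean, with ALeverActor, PullAbility, and Pull being my own placeholder names:

```cpp
#include "GameFramework/Actor.h"
#include "AbilitySystemInterface.h"
#include "AbilitySystemComponent.h"
#include "Abilities/GameplayAbility.h"
#include "LeverActor.generated.h"

UCLASS()
class ALeverActor : public AActor, public IAbilitySystemInterface
{
	GENERATED_BODY()

public:
	ALeverActor()
	{
		AbilitySystemComponent =
			CreateDefaultSubobject<UAbilitySystemComponent>(TEXT("AbilitySystemComponent"));
		AbilitySystemComponent->SetIsReplicated(true);
	}

	virtual UAbilitySystemComponent* GetAbilitySystemComponent() const override
	{
		return AbilitySystemComponent;
	}

	virtual void BeginPlay() override
	{
		Super::BeginPlay();

		// The lever is both owner and avatar of its own ASC.
		AbilitySystemComponent->InitAbilityActorInfo(this, this);

		if (HasAuthority() && PullAbility)
		{
			AbilitySystemComponent->GiveAbility(
				FGameplayAbilitySpec(PullAbility, 1, INDEX_NONE, this));
		}
	}

	// Called on the server (e.g. via an RPC from the interacting player's pawn)
	// to run the lever's own ability.
	void Pull()
	{
		if (HasAuthority() && PullAbility)
		{
			AbilitySystemComponent->TryActivateAbilityByClass(PullAbility);
		}
	}

	UPROPERTY(EditDefaultsOnly, Category = "Abilities")
	TSubclassOf<UGameplayAbility> PullAbility;

	UPROPERTY(VisibleAnywhere)
	UAbilitySystemComponent* AbilitySystemComponent;
};
```

My understanding, though, is that an ASC living on a world actor like this can't predict locally, since prediction keys need the owning player's connection – which is exactly the responsiveness problem I get into below.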

In the case of a larger machine in the middle of the room with a bunch of levers and buttons, it wouldn't make sense for a machine in the shared space to be owned by any one player, but you'd still want each player to see a snappy response to button presses, dial turns, and lever pulls with the fluidity of a local client, instead of a server-authoritative view of the lever-pull or button-push animation that might be slower, or a bit choppy, coming over the network.
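The only way I can see to get that responsiveness is to put a LocalPredicted ability on the interacting player's own ASC and have it target the machine, so the cosmetic response plays immediately on the local client while the server stays authoritative over the actual state change. Again, just a sketch of what I'm imagining – UGA_PressButton, PlayLocalFeedback, and ApplyAuthoritativeStateChange are hypothetical names:

```cpp
#include "Abilities/GameplayAbility.h"
#include "GA_PressButton.generated.h"

UCLASS()
class UGA_PressButton : public UGameplayAbility
{
	GENERATED_BODY()

public:
	UGA_PressButton()
	{
		// Runs immediately on the owning client, then is confirmed or corrected by the server.
		NetExecutionPolicy = EGameplayAbilityNetExecutionPolicy::LocalPredicted;
		InstancingPolicy = EGameplayAbilityInstancingPolicy::InstancedPerActor;
	}

	virtual void ActivateAbility(const FGameplayAbilitySpecHandle Handle,
	                             const FGameplayAbilityActorInfo* ActorInfo,
	                             const FGameplayAbilityActivationInfo ActivationInfo,
	                             const FGameplayEventData* TriggerEventData) override
	{
		if (!CommitAbility(Handle, ActorInfo, ActivationInfo))
		{
			EndAbility(Handle, ActorInfo, ActivationInfo, true, true);
			return;
		}

		// Cosmetic response (button depress animation, click sound) plays right away
		// on the predicting client.
		if (ActorInfo->IsLocallyControlled())
		{
			PlayLocalFeedback();
		}

		// The server re-runs the ability and owns the authoritative state change
		// (door opening, puzzle progress, etc.), which then replicates to everyone else.
		if (ActorInfo->IsNetAuthority())
		{
			ApplyAuthoritativeStateChange();
		}

		EndAbility(Handle, ActorInfo, ActivationInfo, true, false);
	}

	void PlayLocalFeedback() { /* e.g. play a montage or timeline on the machine mesh */ }
	void ApplyAuthoritativeStateChange() { /* e.g. apply a GameplayEffect or flip a replicated flag */ }
};
```

Is that roughly the expected pattern, or is there a better-supported way to get locally snappy interactions with shared, unowned objects?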


bump, any more info on this?