Gameplay Ability System and VR

While the Gameplay Ability System (GAS) and the Action RPG sample project are a compelling way to architect and organize gameplay logic in a decoupled and encapsulated manner, I’m wondering how people have used it, or could use it, in multiplayer VR games.

Multiplayer VR games typically allow multiple users to interact with the same object, for example one user handing an object or tool to a second user. GAS seems to require a player to own the tool for purposes of local client responsiveness and server-side prediction, so how should that ownership change when one user hands the tool to another?
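
To make the question concrete, here is roughly the kind of hand-off I picture happening on the server, where the held tool grants its abilities to whoever currently holds it and strips them from the previous holder. AHandheldTool, HeldAbilities, and TransferTo are names I’ve made up for illustration; only the GiveAbility / ClearAbility calls and the spec-handle bookkeeping are standard GAS:

```cpp
// Hypothetical hand-held tool actor. Only GiveAbility / ClearAbility and
// FGameplayAbilitySpec are real GAS API; the rest is a sketch of one possible
// way to structure the hand-off between two players.
#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "AbilitySystemComponent.h"
#include "Abilities/GameplayAbility.h"
#include "HandheldTool.generated.h"

UCLASS()
class AHandheldTool : public AActor
{
	GENERATED_BODY()

public:
	// Abilities this tool grants to whichever player is currently holding it.
	UPROPERTY(EditDefaultsOnly, Category = "Abilities")
	TArray<TSubclassOf<UGameplayAbility>> HeldAbilities;

	// Server-side hand-off: strip the abilities from the previous holder's ASC
	// and grant them to the new holder's ASC.
	void TransferTo(UAbilitySystemComponent* NewHolderASC)
	{
		if (!HasAuthority())
		{
			return; // GiveAbility / ClearAbility only work on the server.
		}

		if (CurrentHolderASC)
		{
			for (const FGameplayAbilitySpecHandle& Handle : GrantedHandles)
			{
				CurrentHolderASC->ClearAbility(Handle);
			}
			GrantedHandles.Reset();
		}

		CurrentHolderASC = NewHolderASC;

		if (CurrentHolderASC)
		{
			for (const TSubclassOf<UGameplayAbility>& AbilityClass : HeldAbilities)
			{
				GrantedHandles.Add(CurrentHolderASC->GiveAbility(
					FGameplayAbilitySpec(AbilityClass, 1, INDEX_NONE, this)));
			}
		}
	}

private:
	UPROPERTY()
	UAbilitySystemComponent* CurrentHolderASC = nullptr;

	TArray<FGameplayAbilitySpecHandle> GrantedHandles;
};
```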

Is there any standard or expected way to use GAS in a multiplayer VR setup that would allow objects to be passed between two users, as well as interaction with objects that are not necessarily “in the possession” of any player, for example a piece of equipment like a lever on the wall that opens a door, or a security keypad whose buttons they push? A multiuser VR escape-room game is a tangible example of a space where multiple items need to be used by various users.

In the case of a larger machine in the middle of the room with a bunch of levers and buttons, it wouldn’t make sense for the machine in the shared space to be owned by a player, but you would still want the player to see a snappy response to button presses, dial turns, and lever pulls with the fluidity of a local client, rather than a server-authoritative view of the lever-pull or button-push animation that might be slower, or a bit choppy, coming over the network.
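
One idea I’ve been turning over for that case: the machine itself never owns an AbilitySystemComponent; instead, the interacting player’s ASC runs a LocalPredicted ability, so the press or pull responds immediately on their client while the server stays authoritative over what the machine actually does. GA_UseLever and the comments about how the lever gets passed in are my own assumptions; NetExecutionPolicy, CommitAbility, and EndAbility are standard GAS:

```cpp
// Sketch: a locally predicted ability on the *player's* ASC that targets a shared
// world object (lever, keypad, etc.). "GA_UseLever" is a placeholder name.
#include "CoreMinimal.h"
#include "Abilities/GameplayAbility.h"
#include "GA_UseLever.generated.h"

UCLASS()
class UGA_UseLever : public UGameplayAbility
{
	GENERATED_BODY()

public:
	UGA_UseLever()
	{
		// Runs immediately on the owning client and is confirmed/corrected by the
		// server, so the lever feels snappy instead of waiting a round trip.
		NetExecutionPolicy = EGameplayAbilityNetExecutionPolicy::LocalPredicted;
		InstancingPolicy = EGameplayAbilityInstancingPolicy::InstancedPerActor;
	}

	virtual void ActivateAbility(const FGameplayAbilitySpecHandle Handle,
	                             const FGameplayAbilityActorInfo* ActorInfo,
	                             const FGameplayAbilityActivationInfo ActivationInfo,
	                             const FGameplayEventData* TriggerEventData) override
	{
		if (!CommitAbility(Handle, ActorInfo, ActivationInfo))
		{
			EndAbility(Handle, ActorInfo, ActivationInfo, true, true);
			return;
		}

		// TriggerEventData could carry the lever/keypad actor as the event target
		// (e.g. sent from the hand's overlap/interaction code).
		// Locally: play the pull/press animation right away for immediate feedback.
		// With authority: apply the real state change (open the door, etc.) and let
		// that replicate to everyone else normally.

		EndAbility(Handle, ActorInfo, ActivationInfo, true, false);
	}
};
```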

bump, any more info on this?

I’m back. I haven’t found any info, but I’m of the opinion that this type of system, or something similar, will work well for designing VR / non-VR cross-play games, since all items can be shared across the different play styles.

Since I’ve found no info on it, I’ll be setting it up on my own.
I found a good GAS inventory that will likely be my starting point for testing the best way to assemble and interact with objects.

My two current guesses are these:

  • each interactable part of an item needs to be a self-contained component or grouping of logic, stored as a fragment, that gets attached together… somewhere
  • fragments can contain Blueprint classes to spawn (e.g. a gun, sword, etc.) that still grant abilities back to the player, but in a pattern more like this: grab the thing, grant the ability, listen for conditions, call the ability off a message bus subsystem? Letting go is the cancel, or whatever the scenario calls for (rough sketch below).
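
Here is a very rough sketch of what I mean by that second guess. UItemFragment, OnGrabbed, and OnReleased are placeholder names for the fragment idea; only the GiveAbility / ClearAbility calls are standard GAS:

```cpp
// Placeholder "fragment" object that an item could be assembled from. It can name
// an actor class to spawn and abilities to grant to whoever grabs the item.
#include "CoreMinimal.h"
#include "GameFramework/Actor.h"
#include "AbilitySystemComponent.h"
#include "Abilities/GameplayAbility.h"
#include "ItemFragment.generated.h"

UCLASS(EditInlineNew, DefaultToInstanced)
class UItemFragment : public UObject
{
	GENERATED_BODY()

public:
	// Optional visual/physical piece spawned when the item is assembled (gun, sword, etc.).
	UPROPERTY(EditDefaultsOnly)
	TSubclassOf<AActor> ActorToSpawn;

	// Abilities granted to the grabbing player's ASC while the item is held.
	UPROPERTY(EditDefaultsOnly)
	TArray<TSubclassOf<UGameplayAbility>> AbilitiesWhileHeld;

	// "Grab thing, grant ability": called by the owning item on the server when grabbed.
	void OnGrabbed(UAbilitySystemComponent* GrabberASC)
	{
		for (const TSubclassOf<UGameplayAbility>& AbilityClass : AbilitiesWhileHeld)
		{
			GrantedHandles.Add(GrabberASC->GiveAbility(
				FGameplayAbilitySpec(AbilityClass, 1, INDEX_NONE, this)));
		}
		// The granted abilities can then listen for their trigger conditions
		// (gameplay events, a message bus subsystem, input, etc.) while held.
	}

	// "Let go is cancel": clear the granted abilities when the item is released,
	// ending/cancelling anything from this fragment as the game's rules require.
	void OnReleased(UAbilitySystemComponent* GrabberASC)
	{
		for (const FGameplayAbilitySpecHandle& Handle : GrantedHandles)
		{
			GrabberASC->ClearAbility(Handle);
		}
		GrantedHandles.Reset();
	}

private:
	TArray<FGameplayAbilitySpecHandle> GrantedHandles;
};
```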

Anyone with tips please let me know.