What does one have to keep in mind when developing for both the Rift and the Vive? I essentially want to make the same app work exactly the same way on both. Does Unreal handle the differences well? Let's say I develop an app completely without motion controllers; would I potentially be able to run it on both the Rift and the Vive? Or will development have to be altered slightly for each one?
For the most part, Unreal abstracts away the differences so you don't have to worry about them. That said, since the Vive supports room-scale experiences while the Rift typically does not, you'll run into some design challenges (rather than technical challenges) that you'll need to solve.
In my limited experience, the only tricky technical part is creating a player pawn class that works well for both the Vive's room-scale tracking model and the Rift's stay-in-one-place model. With the Rift, the player's head/camera is never offset from the pawn's relative (0, 0, 0) location by more than a small amount, so you can essentially treat the HMD's x, y position and the pawn's position as the same. With the Vive, however, the player's head/camera can be offset quite drastically from the pawn's (0, 0, 0) reference point. Think of the pawn's origin on the Vive as always being pinned to the physical center of the play area, regardless of where the player is standing in the room. This makes locomotion mechanics like the popular "teleport" approach a little more complex to implement: Vive users expect the teleport to put their head at the location they aimed at, so you can't simply move the whole pawn to that position - you have to subtract the player's current HMD offset (relative to the room center) from the target location first. A rough sketch of that compensation follows below.
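To illustrate, here's a minimal C++ sketch of what that compensation might look like, not a definitive implementation. It assumes a custom pawn class (called AVRPawn here, a hypothetical name) with a UCameraComponent whose "Lock to HMD" option is enabled, so the camera's relative location reflects the head's offset from the tracking origin each frame:

```cpp
#include "Camera/CameraComponent.h"
#include "GameFramework/Pawn.h"

// Sketch: teleport so the player's HEAD (not the pawn origin) lands on
// TargetLocation. AVRPawn and CameraComponent are assumed to be defined
// in your own pawn class.
void AVRPawn::TeleportToLocation(const FVector& TargetLocation)
{
    // With "Lock to HMD" enabled, the camera's relative location is the
    // HMD's offset from the pawn origin (the room center on the Vive).
    FVector HeadOffset = CameraComponent->GetRelativeLocation();

    // Only compensate horizontally; the player's real-world standing
    // height should be preserved after the teleport.
    HeadOffset.Z = 0.f;

    // The offset is in pawn-local space, so rotate it into world space
    // in case the pawn has been rotated.
    const FVector WorldHeadOffset = GetActorRotation().RotateVector(HeadOffset);

    // Shift the pawn so the head ends up at the target.
    SetActorLocation(TargetLocation - WorldHeadOffset);
}
```

On the Rift the offset is near zero, so this code collapses to a plain SetActorLocation, which is what makes the same pawn work for both headsets.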