I haven’t published a VR app myself, but I can say what I’ve seen in other apps.
The Oculus Quest interface uses a mix of both approaches.
For the main menu, they want the player to feel like they're in a real space, so the menu stays put. That lets the player look around and take in the details of the 3D environment.
For alerts and modal menus - like the boundary configuration or power-off menu - a panel pops up in front of the player and generally moves - with a smoothing delay - to wherever the player is looking. These menus tend to have fewer buttons, so the player is less likely to click the wrong thing even if the panel drifts a little.
– All of these menus reposition when the player makes a big head turn, so the modal stays in front of the player's face, but on a smoothed delay.
– Some of them shift position subtly on small head turns, while others don't move at all until the system detects the player is looking away (a rough sketch of this lazy-follow behavior is below).
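Here's roughly what that lazy-follow behavior could look like, assuming a three.js / WebXR setup. I haven't seen Oculus's actual implementation - the function name, follow distance, deadzone angle, and smoothing constant below are all placeholders of my own:

```ts
import * as THREE from 'three';

// Placeholder tuning values - not Oculus's numbers.
const FOLLOW_DISTANCE = 1.5;                   // meters in front of the head
const ANGLE_THRESHOLD = 25 * (Math.PI / 180);  // deadzone before the panel moves
const SMOOTHING = 3.0;                         // higher = snappier catch-up

// Lazy-follow for a modal panel: stay put for small head turns,
// then ease back in front of the player once gaze drifts past a threshold.
export function updateModalFollow(
  camera: THREE.Camera,
  panel: THREE.Object3D,
  dt: number
): void {
  const headPos = camera.getWorldPosition(new THREE.Vector3());
  const forward = camera.getWorldDirection(new THREE.Vector3());

  // Where the panel "wants" to be: straight ahead at a fixed distance.
  const target = headPos.clone().addScaledVector(forward, FOLLOW_DISTANCE);

  // Angle between the gaze direction and the direction to the panel.
  const toPanel = panel.position.clone().sub(headPos).normalize();
  const gazeAngle = forward.angleTo(toPanel);

  // Only chase the gaze once it leaves the deadzone; otherwise stay put.
  if (gazeAngle > ANGLE_THRESHOLD) {
    // Exponential smoothing gives the delayed, eased catch-up motion.
    const t = 1 - Math.exp(-SMOOTHING * dt);
    panel.position.lerp(target, t);
  }

  // Keep the panel facing the player whether or not it moved.
  panel.lookAt(headPos);
}
```

The deadzone is what makes it feel stable: small glances don't disturb the panel, so buttons stay where the player expects them, and only a deliberate look-away pulls the modal back into view.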
To me, it looks like the best practice is:
Try to give the player a sense of place as early as possible: keep your main menu still and give it a 3D environment the player can look around in - even if it's abstract.
If you need to show the player an alert they must interact with - and don't want them distracted by anything else - use a smoothed motion to bring it back in front of their face when you detect them looking away (see the render-loop sketch below).
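A minimal sketch of wiring that into a WebXR render loop - again, the panel, the alertOpen flag, and the numbers are placeholders of my own, not anything from the Oculus SDK:

```ts
import * as THREE from 'three';
// updateModalFollow is the helper sketched above.

const scene = new THREE.Scene();
const camera = new THREE.PerspectiveCamera();
const renderer = new THREE.WebGLRenderer({ antialias: true });
renderer.xr.enabled = true;

// Hypothetical alert panel; in a real app this would be your modal UI.
const alertPanel = new THREE.Mesh(
  new THREE.PlaneGeometry(0.6, 0.4),
  new THREE.MeshBasicMaterial({ color: 0x222222 })
);
scene.add(alertPanel);
let alertOpen = true;

const clock = new THREE.Clock();
renderer.setAnimationLoop(() => {
  const dt = clock.getDelta();
  // The world-locked main menu needs no per-frame update; only the
  // open modal eases back in front of the player's face.
  if (alertOpen) {
    updateModalFollow(camera, alertPanel, dt);
  }
  renderer.render(scene, camera);
});
```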
There may be more guidance in the best-practices section of the Oculus developer documentation.