Hello, I’m currently exploring redirected/extended walking in VR for my thesis, and therefore I’ve been looking for a way to intercept/modify the input that HMDs give to Unreal. The problem I’ve been facing is that I can’t find a clear connection as to where motion input (rotation and position) from the XR device is applied, through the controller, onto the camera.
So far I’ve tried several approaches, but methods that work for non-XR cameras, for example CameraModifiers, ControllerInput, CameraManagers, etc., often seem to be either bypassed for XR or simply not applicable in the same sense.
In the IHeadMountedDisplay (OpenXR) library I have been looking at the FrameStats and Poses, which led me to DeviceLocation; however, I have not found where this might tie into the Controller/Camera. Messing with the DeviceLocations did not offer much help either, as it caused some weird results.
Question in short:
Where does Unreal apply rotation and location updates generated by a VR headset? I am not looking for input like button presses and such, but the actual rotation and location updates from the HMD.
The Goal:
Be able to apply a gain to the current DeltaRot (or DeltaPos). Example: a 60-degree turn in the real world is translated into a 70-degree turn in the game.
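(Just to make the intended math concrete, a trivial sketch of the kind of gain I mean; the numbers and the function name are purely illustrative.)

```cpp
// Illustrative only: apply a rotational gain to the per-frame yaw delta.
// A gain of 70/60 turns a real 60-degree rotation into a virtual 70-degree one.
static float ApplyRotationGain(float RealYawDeltaDegrees, float Gain = 70.0f / 60.0f)
{
    return RealYawDeltaDegrees * Gain;
}
```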
Notes:
From looking around previously, it seems that changing how Unreal deals with HMD input is not a recommended thing to do; however, I am very sure that that is what I want to do. The thesis is based around traditional locomotion (walking) as the way of moving in the game, where the user’s movement is altered on-the-fly to redirect them away from their playspace boundaries. Since walking is the only form of locomotion I am using, I am not interested in alternatives, so joystick, teleporting, or similar methods do not apply to my case.
My current solution simply rotates the camera on the “Tick”, but I would like to transform the rotation and position at a much earlier stage.
The camera gets updated by the camera component calling FDefaultXRCamera::UpdatePlayerCamera() which calls IXRTrackingSystem::GetCurrentPose() in the current XR plugin. This happens during the tick of the player controller.
It will also do a late update on the render thread to directly update the view matrix with the latest poses. This also calls GetCurrentPose(), but doesn’t touch the camera component.
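If it helps, this is roughly how you can read the same pose yourself from game code. A sketch from memory of the engine source; the `GetCurrentPose` signature and `HMDDeviceId` constant should be verified against your engine version:

```cpp
#include "Engine/Engine.h"
#include "IXRTrackingSystem.h"

// Reads the current HMD pose from the active XR tracking system, i.e. the same
// data that FDefaultXRCamera::UpdatePlayerCamera() applies to the camera component.
static bool ReadHmdPose(FQuat& OutOrientation, FVector& OutPosition)
{
    if (GEngine && GEngine->XRSystem.IsValid())
    {
        return GEngine->XRSystem->GetCurrentPose(
            IXRTrackingSystem::HMDDeviceId, OutOrientation, OutPosition);
    }
    return false;
}
```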
I still think it’s a better idea to parent the camera component to a scene component and modify the pose of that, rather than modifying the camera or the poses inside the XR plugin themselves. All the poses in the XR plugin are mapped relative to the room, and changing them directly will cause a lot of issues.
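As a rough sketch of what I mean, assuming a pawn whose camera sits under an extra scene component. The class name, component names, and gain logic here are purely illustrative, not engine API, and a full RDW implementation would rotate about the head position rather than the tracking origin:

```cpp
#include "GameFramework/Pawn.h"
#include "Camera/CameraComponent.h"
#include "Components/SceneComponent.h"
#include "HeadMountedDisplayFunctionLibrary.h"
#include "RedirectedWalkingPawn.generated.h"

UCLASS()
class ARedirectedWalkingPawn : public APawn
{
    GENERATED_BODY()

public:
    ARedirectedWalkingPawn()
    {
        PrimaryActorTick.bCanEverTick = true;

        RedirectRoot = CreateDefaultSubobject<USceneComponent>(TEXT("RedirectRoot"));
        SetRootComponent(RedirectRoot);

        Camera = CreateDefaultSubobject<UCameraComponent>(TEXT("Camera"));
        Camera->SetupAttachment(RedirectRoot);
        Camera->bLockToHmd = true; // engine applies the raw HMD pose to this component
    }

    virtual void Tick(float DeltaSeconds) override
    {
        Super::Tick(DeltaSeconds);

        // Read the raw HMD yaw and apply the *extra* rotation (Gain - 1) to the
        // parent, so the combined camera rotation equals Gain * real rotation.
        FRotator HmdRotation;
        FVector HmdPosition;
        UHeadMountedDisplayFunctionLibrary::GetOrientationAndPosition(HmdRotation, HmdPosition);

        const float YawDelta = FRotator::NormalizeAxis(HmdRotation.Yaw - PreviousHmdYaw);
        PreviousHmdYaw = HmdRotation.Yaw;

        // Note: this rotates around the tracking origin; it only illustrates the gain.
        RedirectRoot->AddLocalRotation(FRotator(0.f, YawDelta * (RotationGain - 1.f), 0.f));
    }

private:
    UPROPERTY(VisibleAnywhere) USceneComponent* RedirectRoot = nullptr;
    UPROPERTY(VisibleAnywhere) UCameraComponent* Camera = nullptr;

    float PreviousHmdYaw = 0.f;
    float RotationGain = 70.f / 60.f; // the example gain from the question
};
```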
What are the issues with doing it on the tick? The poses from the XR system should already be up to date by then.
Thank you for the reply, I will look into this tomorrow, but it seems promising.
I thought I had gone through most of the CameraComponent and similar, but this might be the missing link.
There really are no major issues with doing updates on the tick; I was mostly looking for a better alternative to rotating the camera without having to transform my camera rotation relative to the pawn it is attached to. It seems something similar is happening in the UpdatePlayerCamera call, but perhaps I can sneak some code in there to simplify my current solution.
Anyway, thank you for a quick and concise reply. (I’ll mark it as a solution based on my testing tomorrow.)
I am sorry for taking so long. I’ve followed what you said, and it is correct. I must have skipped over this when searching for myself.
I have implemented a simple modifier for the VR input where I can dynamically scale it with a multiplier or add a static offset. I did this by overriding the OpenXR plugin and injecting a call to a nullable custom object, so before GetCurrentPose returns, it sends the result to this object, which can then modify it as it wishes.
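Roughly like this; the interface and the injection point are my own naming, not engine API, and this is just a sketch of the idea:

```cpp
#include "CoreMinimal.h"

// Hypothetical hook: the patched plugin holds a nullable pointer to one of
// these and calls it on the freshly tracked pose before returning it.
class IPoseModifier
{
public:
    virtual ~IPoseModifier() = default;
    virtual void ModifyPose(FQuat& InOutOrientation, FVector& InOutPosition) = 0;
};

// Example modifier: scale yaw by a gain and add a static positional offset.
class FGainPoseModifier : public IPoseModifier
{
public:
    float YawGain = 70.f / 60.f;
    FVector PositionOffset = FVector::ZeroVector;

    virtual void ModifyPose(FQuat& InOutOrientation, FVector& InOutPosition) override
    {
        FRotator Rot = InOutOrientation.Rotator();
        Rot.Yaw *= YawGain;                 // scale the tracked yaw
        InOutOrientation = Rot.Quaternion();
        InOutPosition += PositionOffset;    // apply a static offset
    }
};

// Inside the patched GetCurrentPose(...), just before returning:
//     if (PoseModifier) { PoseModifier->ModifyPose(CurrentOrientation, CurrentPosition); }
```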
To make things easier, I’ve also made a Room Scale VR character that locks the HMD to the pawn and moves the pawn instead of changing the camera offset.
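The room-scale part follows the usual pattern of moving the pawn under the head each tick and compensating on the camera’s parent. A sketch of the idea, not my exact code; it assumes the camera is attached to a dedicated CameraParent scene component under the pawn root:

```cpp
#include "GameFramework/Pawn.h"
#include "Camera/CameraComponent.h"
#include "Components/SceneComponent.h"

// Called from the pawn's Tick after the HMD pose has been applied to the camera.
static void SyncPawnToHmd(APawn* Pawn, USceneComponent* CameraParent, UCameraComponent* Camera)
{
    // Horizontal offset between the pawn origin and where the HMD put the camera.
    FVector HeadOffset = Camera->GetComponentLocation() - Pawn->GetActorLocation();
    HeadOffset.Z = 0.f; // only follow the head in the horizontal plane

    Pawn->AddActorWorldOffset(HeadOffset);      // pawn follows the head...
    CameraParent->AddWorldOffset(-HeadOffset);  // ...camera stays put in the world
}
```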
Great to hear you got a result. Fascinating thesis, too. RDW!
I am doing my PhD research on room-scale real walking, so I would love to pick your brain about some aspects of VR dev which I am sure we have in common. For example, I am trying to create a digital twin of my lab environment (which I have done in Blender) and sync it exactly with the VE, so that one can reach out and touch a wall in the VE and get the real haptic response from the actual wall in the RE.