3rd person motion tracking

Hello!

I am trying to create a simulation where the player would see a disembodied representation of himself. I am working from the third-person Blueprint template, running the game in VR. By default, you control the character with WASD or the touchpads on the Vive controllers. However, I need the character to be controlled by the position of the headset (and eventually include arm movements using the controllers). I am unable to find the tracking of the headset in the Event Graph. Anybody got a clue?

As long as the Camera in your player has Lock to HMD checked, the position and rotation of the camera coincide with those of the HMD. In other words, to get the position and rotation of the HMD, just get the location and orientation of the camera.

Thanks for the reply. It does have Lock to HMD checked, but I still can't get it to work. I'm fairly new to Unreal, so I might be missing something very basic. I assume I would use GetWorldLocation of the camera and plug the value into a SetWorldLocation for the character (self).
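In C++ terms, I imagine those nodes map to something like this rough sketch (VRCamera and Mannequin are made-up names for the camera and mesh components on my character):

```cpp
// Rough sketch of GetWorldLocation -> SetWorldLocation, run every frame.
// AMyCharacter, VRCamera (UCameraComponent*) and Mannequin
// (USkeletalMeshComponent*) are placeholder names.
void AMyCharacter::Tick(float DeltaSeconds)
{
    Super::Tick(DeltaSeconds);

    // With Lock to HMD checked, the camera transform follows the headset,
    // so its world location/rotation should be the HMD pose.
    const FVector HeadLocation = VRCamera->GetComponentLocation();
    const FRotator HeadRotation = VRCamera->GetComponentRotation();

    // Drive the visible character mesh from the head pose.
    Mannequin->SetWorldLocationAndRotation(HeadLocation, HeadRotation);
}
```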

What I didn't say in the first post is that I would like to have the camera locked in position (you would see the same image when you turn your head) and only use the motion tracking of the controllers and the headset to move the character around the otherwise static image. At this point, I don't care whether it's gonna make the player sick, I just need to try it.

Any idea how I would do this?

Ok, but then you can unlock the camera from the HMD, use the Get Orientation and Position node to get the orientation and position of the HMD, then apply them to your disembodied character. You will also need to offset it in the world somehow, otherwise its position will coincide with that of your VR camera and you may not see it.
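A minimal C++ sketch of that idea, assuming a character class with a Mannequin mesh component (Get Orientation and Position is the Blueprint wrapper for the call below; the offset value is just an example):

```cpp
#include "HeadMountedDisplayFunctionLibrary.h"

void AMyCharacter::Tick(float DeltaSeconds)
{
    Super::Tick(DeltaSeconds);

    if (UHeadMountedDisplayFunctionLibrary::IsHeadMountedDisplayEnabled())
    {
        FRotator HmdRotation;
        FVector HmdPosition; // relative to the tracking origin, not world space

        // Same data as the Get Orientation and Position Blueprint node.
        UHeadMountedDisplayFunctionLibrary::GetOrientationAndPosition(HmdRotation, HmdPosition);

        // Offset the mannequin so it doesn't coincide with the static camera.
        const FVector WorldOffset(200.f, 0.f, 0.f); // arbitrary example offset
        Mannequin->SetWorldLocationAndRotation(HmdPosition + WorldOffset, HmdRotation);
    }
}
```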
Anyway, such a setup is very likely to trigger an oculo-vestibular mismatch and therefore cybersickness.

Thank you. So I managed to add a mannequin that follows the movement of the camera. I added another camera which is disembodied and has Lock to HMD unchecked. This works as it should (still image when the head is turning), but now the mannequin stopped moving. How can I keep using the headset's motion tracking for the mannequin while seeing through the other camera?

Can you show your Blueprints?

I am working off the First Person Blueprint template. Note the added "DisembodiedCam", which overrides the FirstPersonCamera.

I cannot double-check it right now because I don't have access to my dev system, but since neither of your cameras is locked to the HMD, you logically cannot use the camera position to drive the movement of your skeletal mesh. If you want to get the current position of the HMD, you need to use the Get Orientation and Position node.

Nothing changes whether I check or uncheck Lock to HMD on the FirstPersonCamera. I think it gets deactivated because of the other camera.

Did you try, as pointed out earlier, unlocking both cameras from the HMD and using Get Orientation and Position to get the HMD tracking data, then applying it to the skeletal mesh?

Yes, but it didn't work. I found a fairly crude workaround: I changed the disembodied camera to a SceneCaptureComponent2D. This renders to a texture, which is applied to a plane sitting in front of the camera. It works, but I expect it will be extremely expensive once the scene becomes more complex.
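For anyone trying the same thing, the setup boils down to something like this (normally you would just add the component in the editor and assign a Render Target asset; CaptureComp is a placeholder for the SceneCaptureComponent2D on my pawn):

```cpp
#include "Components/SceneCaptureComponent2D.h"
#include "Engine/TextureRenderTarget2D.h"

// Create a render target and point the capture component at it.
UTextureRenderTarget2D* RenderTarget = NewObject<UTextureRenderTarget2D>(this);
RenderTarget->InitAutoFormat(1024, 1024); // resolution of the captured image

CaptureComp->TextureTarget = RenderTarget;
CaptureComp->bCaptureEveryFrame = true; // re-renders the scene every frame, hence the cost
```

The render target texture then goes into the material on the plane.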

There are two issues with the plane: it catches light, so depending on where I stand in the scene, the brightness of the image changes. Also, the screen itself is visible inside its own projection. Is there a way to make the plane not catch light, or something like that? Maybe make it slightly emissive? And is there a way to make the plane invisible to the SceneCaptureComponent2D?
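For the second issue I was hoping for a call along these lines, if something like it exists (CaptureComp and ScreenPlane being placeholder names again), and for the light, maybe an Unlit material with the texture plugged into Emissive Color?

```cpp
// Exclude the plane's mesh from the capture so the screen doesn't
// show up inside its own projection.
CaptureComp->HideComponent(ScreenPlane);
```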

For those interested: it is very hard to manoeuvre while disembodied like this. The orthographic camera removes depth perception, which makes it even harder. So far I haven't gotten ill, but I haven't spent more than 5 minutes at a time in there. I think if I can get a rigged mannequin in there it will be easier to orient myself, as I will be able to see how big my body is compared to everything around it.