VR Movement in Real Life to In-Game


I have a VR game that uses regular locomotion. The player is part of a character actor that can walk around and move. However, I would like to have a camera parented to the character so that every time the camera moves (from head tracking), its relative position is reset back to zero and the character is moved instead. This would prevent the player from walking through walls and would allow them to do evasive maneuvers.

However, I have absolutely no idea how to tell the character exactly how many units I want it to move. I am worried that any kind of teleport blueprint could allow the player to get stuck in walls; I want the character to move a specific amount while still having collisions. Any ideas on how to implement this?


You can attach the camera to the player head using sockets, so that the camera will always follow the player…to avoid re-orienting the camera I strongly suggest using a scene component: make the camera a child of the scene component, then drag the scene component under the character and attach it to the head socket.

You can also add a collision mesh to the head, so that the player won't be able to put their head through walls, but this can lead to sickness in VR. My advice is to use a setup where the user is aware that their head is close to a wall, using visual feedback which lets the player know that they're approaching the wall on that specific side…something like when you're being shot in a normal game and a red line/dot informs you that someone is shooting you from that direction.
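As a rough sketch of that kind of feedback, the intensity of a warning overlay can be driven by the head's distance to the nearest wall. This is plain C++ for illustration only; the function name and thresholds are made up, and in Unreal you would feed the result into a post-process or UMG material:

```cpp
#include <cassert>

// Returns the opacity (0..1) of a warning overlay based on how close
// the head is to a wall. WarnStart is where the fade begins, WarnFull
// is where it reaches full opacity. The distances are hypothetical.
double WallWarningAlpha(double DistanceToWall,
                        double WarnStart = 50.0,   // cm, fade begins here
                        double WarnFull  = 10.0) { // cm, fully opaque here
    if (DistanceToWall >= WarnStart) return 0.0;   // far enough away
    if (DistanceToWall <= WarnFull)  return 1.0;   // right at the wall
    // Linear fade between the two thresholds.
    return (WarnStart - DistanceToWall) / (WarnStart - WarnFull);
}
```

The distance itself would come from a sphere/line trace from the HMD position each tick.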

Frankly speaking, UE4 doesn't provide a solid out-of-the-box solution for room-scale VR. It works ok-ish if your Pawn just has a pair of hands and you are fine with it being able to stick its head through walls (in view of avoiding any oculo-vestibular mismatch that may lead to cybersickness). That said, as soon as you need a fully animated body which moves in sync with the VR camera and more control over positions and collisions, you need to be ready to write your own code from scratch or consider using a plugin which is specifically designed with room-scale in mind (e.g. @mordentral's).

Luckily enough, Epic provides some examples you can look into to get started. Couch Knights (even though somewhat outdated) is one of them. Or you can look into the source code of Mordentral's plugin and learn how he is managing it.

The problem is, as long as you have the VR Camera and the VR body inside the same Actor BP, the VR Camera will always be a child of the root component (through the CameraOrigin Scene Component), so you cannot really have the root component follow the camera when you move IRL because the hierarchy works the other way around. A potential solution could be to completely decouple the VR body from the VR Pawn and keep them "manually" in sync.

An extra complication is given by late updates. The camera and the motion controllers will always get an “extra” update during the Render Thread and after the regular Tick, so unless you disable it and accept a bit more latency, your VR body will always lag a bit behind the camera movements.

You are not the first one to stumble upon this issue, read here for example:…camera-in-4-11

Some time ago @Slayemin has also looked into the very same problem and has proposed a solution which however requires modifying the Engine. You can read about it here:…on-controllers

Bottom line: IMHO there is no perfect (out-of-the-box) solution yet, definitely something that may benefit from some serious community discussion and joint work.

@Jonas_Molgaard You have also done a lot of work on this subject. Can you maybe share your experience with the Community? TIA.

Slayemin's solution locks the camera in place, manually moves the pawn with HMD movements, and then moves the controllers' root component backwards by the HMD's relative location on tick. This has many downsides, one being that it breaks anything that relies on the relative position of the HMD (e.g. the Chaperone component), as well as being entirely unsuited to multiplayer unless you inject the HMD movement directly into the character's saved movements or velocity (pretty bad, since you can then suffer rollbacks from direct HMD movement, which feels terrible). It doesn't need to be an engine modification either; the function he is modifying is virtual, so you can just subclass it and re-write it locally.
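A minimal sketch of that counter-offset idea, in plain C++ with 1D positions for brevity (the struct and member names are illustrative, not engine API):

```cpp
#include <cassert>

// The camera is pinned at the pawn root; the pawn itself is moved by
// the HMD's movement delta each tick, and the motion controllers'
// parent component is shifted backwards by the HMD's relative location
// so the hands stay aligned with the head.
struct CounterOffsetPawn {
    double PawnX = 0;        // pawn root world position (1D for brevity)
    double PrevHmdRelX = 0;  // HMD relative location seen last tick

    // Returns the relative offset to apply to the controllers' parent.
    double Tick(double HmdRelX) {
        PawnX += HmdRelX - PrevHmdRelX;  // pawn follows head movement
        PrevHmdRelX = HmdRelX;
        return -HmdRelX;                 // counter-offset for the hands
    }
};
```

As noted above, anything that reads the HMD's relative location directly (like the Chaperone component) will be confused by this counter-offset.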

It doesn't actually require a custom override of the camera component to achieve, though; you can offset it by the location instead. Jonas has been using this entirely in Blueprint with velocity injection based on the current max move speed and no acceleration.
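The velocity-injection variant can be sketched like this (plain C++ with made-up names; in Unreal the resulting velocity would be handed to the movement component so collision handling stays intact):

```cpp
#include <cassert>
#include <cmath>

// Minimal 2D vector for the horizontal plane (illustrative, not an
// engine type).
struct Vec2 { double X = 0, Y = 0; };

// Convert the HMD's positional delta this frame into a movement
// velocity, clamped to the character's max walk speed, so the normal
// movement pipeline (with collision) consumes it rather than the
// camera being teleported.
Vec2 HmdDeltaToVelocity(Vec2 Delta, double DeltaTime, double MaxSpeed) {
    Vec2 V{ Delta.X / DeltaTime, Delta.Y / DeltaTime };
    double Speed = std::sqrt(V.X * V.X + V.Y * V.Y);
    if (Speed > MaxSpeed) {              // clamp to max walk speed
        double Scale = MaxSpeed / Speed;
        V.X *= Scale;
        V.Y *= Scale;
    }
    return V;
}
```

The clamp is also why fast IRL movement can make the body lag behind the head with this approach: the injected velocity can never exceed the configured max speed.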

Most people, like MuchCharles, utilize two actors and sync them up. This works for the most part, but there are specific sections of the character movement that really need overhauls for VR, as they behave undesirably (wall sliding, for one). Manual rotation also gets more complicated in this scenario, as you have to update both actors. Still, this is likely the most accessible method.
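A toy version of that two-actor sync, in 1D plain C++ (the wall clamp stands in for real sweep-based collision, and all names are made up):

```cpp
#include <cassert>
#include <algorithm>

// Each tick the Character tries to move to where the HMD is in world
// space. If collision stops it short, the camera pawn's tracking
// origin is pulled back by the shortfall so the head cannot end up
// inside the wall.
struct TwoActorSync {
    double CharX = 0;    // character capsule world position
    double OriginX = 0;  // camera pawn tracking-origin world position
    double WallX = 5.0;  // a blocking wall (stand-in for real collision)

    void Tick(double HmdLocalX) {
        double TargetX = OriginX + HmdLocalX;       // HMD world position
        double NewCharX = std::min(TargetX, WallX); // "collision" clamp
        double Shortfall = TargetX - NewCharX;      // blocked distance
        CharX = NewCharX;
        OriginX -= Shortfall;  // keep the camera on the capsule
    }
};
```

After a blocked tick, the camera's world position (origin plus HMD local offset) lands exactly on the capsule, which is the whole point of the sync.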

For my plugin I specifically move the visual representation and physics-thread representation of the root capsule to the HMD's relative location and run all character movement logic with the offset included, so that it remains a single actor and doesn't inject HMD movement directly into character movement (moves are rewound and replayed in case of collisions). This has its own downsides in that it took significant (near total) customization of the character/character movement to achieve, and there are now separate "world locations" for the character, the root, and the VR location (although that retains Epic's desired relative-offset workflow). I also offer the direct-driving solution as an alternative option but have started suggesting that people not use that character if possible.

There is no perfect fix. For single player, directly driving the character by the HMD is likely fine; for multiplayer it gets more complicated. With roomscale you are dealing with a small-space-in-a-large-world problem.

Yeah, well, I kinda do what Mordentral mentions above. I work with a Character and a Pawn: the Pawn holds the camera, and the Character holds the player capsule plus all the nice movement component stuff that is built in there. I keep things synced up on tick. I did a fairly detailed description of my setup on my channel using a physics capsule, which for lots of (hopefully) obvious reasons did not turn out too well in the end, but which basically serves as a foundation for the method I currently use. I have not done ANY testing in MP, so I'm not sure what adjustments and quirks there might be with that and can't really comment on it.

I'm not sure if this really pertains to this thread, but from what I've read, the three of you (@vr_marco, @mordentral, and @Jonas_Molgaard) all have a very adept knowledge of the current VR frameworks. I've been following a few bits of documentation online and have put together a SkeletalMesh Pawn where the head and hands use Inverse Kinematics to track the HMD and motion controllers to replicate movement in the SkM. What I would now like to produce is a replication of the player's movements or locomotion in worldspace. Would I continue to use a form of IK to do so, or do any of you have a better way? Any advice you can give on this would be greatly appreciated.