How To: Object Collision with Room Scale Locomotion in 4.11

One concern I’ve had with flat-out blocking movement is how it will feel to the player. During some tests of mine with a Rift DK2, if I lock the camera position and don’t move it as the player does, I get extreme nausea almost instantly. I don’t have my Vive yet, so I’m not sure how this feels in a walking, room-scale environment, but I suspect there may be issues if you are walking and your camera suddenly stops.

I think overall the better idea for VR would be to black out or fade out the camera when an occlusion with an impassable object is detected, with some kind of light fog indicating the way back into the valid play space.
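
As a rough illustration of that idea (my own sketch, not something from the original post), the fade could be driven by overlap events on a small sphere component parented to the camera, using APlayerCameraManager::StartCameraFade. The AMyVRPawn class, the overlap bindings, and the collision setup are all assumptions about your pawn:

```cpp
// Hedged sketch: fade the view to black while the player's head overlaps an
// impassable wall, assuming a small sphere component parented to the camera
// whose Begin/End overlap events are bound to these functions elsewhere.
#include "Kismet/GameplayStatics.h"

void AMyVRPawn::OnHeadBeginOverlap(UPrimitiveComponent* OverlappedComp,
                                   AActor* OtherActor,
                                   UPrimitiveComponent* OtherComp,
                                   int32 OtherBodyIndex,
                                   bool bFromSweep,
                                   const FHitResult& SweepResult)
{
    // Fade the camera to black over 0.2 seconds and hold it there.
    if (APlayerCameraManager* CamMgr = UGameplayStatics::GetPlayerCameraManager(this, 0))
    {
        CamMgr->StartCameraFade(0.f, 1.f, 0.2f, FLinearColor::Black,
                                /*bShouldFadeAudio=*/false,
                                /*bHoldWhenFinished=*/true);
    }
}

void AMyVRPawn::OnHeadEndOverlap(UPrimitiveComponent* OverlappedComp,
                                 AActor* OtherActor,
                                 UPrimitiveComponent* OtherComp,
                                 int32 OtherBodyIndex)
{
    // Fade back in once the head leaves the wall.
    if (APlayerCameraManager* CamMgr = UGameplayStatics::GetPlayerCameraManager(this, 0))
    {
        CamMgr->StartCameraFade(1.f, 0.f, 0.2f, FLinearColor::Black, false, false);
    }
}
```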

I think some kind of haptic feedback on overlap would suffice?
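
For what it’s worth, a minimal sketch of the haptic idea (purely illustrative, not the original poster’s setup) could bind an overlap event on a hand or head collision component and pulse the controller through APlayerController::SetHapticsByValue; the AMyVRPawn class and the OnWallOverlap binding are assumptions:

```cpp
// Hedged sketch: give a short haptic pulse on the motion controller when a
// tracked component overlaps a blocking wall. Tune frequency/amplitude to taste.
void AMyVRPawn::OnWallOverlap(UPrimitiveComponent* OverlappedComp,
                              AActor* OtherActor,
                              UPrimitiveComponent* OtherComp,
                              int32 OtherBodyIndex,
                              bool bFromSweep,
                              const FHitResult& SweepResult)
{
    if (APlayerController* PC = Cast<APlayerController>(GetController()))
    {
        // Strong, brief pulse on the right hand as a "you hit a wall" cue.
        PC->SetHapticsByValue(/*Frequency=*/1.f, /*Amplitude=*/0.8f, EControllerHand::Right);
    }
}
```
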
This will cause nausea. When I had a DK2 and games would crash (which happened a lot), it made me very dizzy and sick. Nonetheless, good job :slight_smile:

Yeah, I thought that blocking the camera from moving through walls would cause nausea, but in my testing with the Vive, it wasn’t actually an issue.

But let’s pretend it actually does. So what? The designer's objective is to prevent people from walking through walls, and that is achieved; if players get a slightly unpleasant feeling for trying, they will learn to stop trying to walk through walls.

There was another interesting ‘issue’ I discovered during testing, though. As you probably know, the Vive has its Chaperone boundaries, which mark the physical walls in the real world that you should not try to walk through. Once I had trained myself to walk through walls in VR, I started to mix up the virtual walls and the physical walls, and I’d accidentally walk into real-world walls expecting to phase through them. Nope! The real world stops my real-world body and camera (eyes) from passing through the physical walls, and I get an unpleasant bump for trying. If reality can punish me for it, then I shouldn’t feel too bad about punishing players in virtual reality.

Can you explain what you mean by “Within your character blueprint, you need to create a function called ‘Get head position’ that returns a vector of the head position in world space. You can hard code in a return value”?

Yeah, so when you use “GetActorLocation()” on a target actor, you’re getting the very center of that actor. For a character, this point is generally right at hip level. This may change depending on the type of character you have, though, so a horse would have a very different center position from a humanoid.

If you’re controlling a character within VR, you want to view that character from its head position, not its hip or center position. This usually means you’ll have a vector which contains an offset / displacement value from the character's center position. So, for a standing humanoid, this would usually just be a vector with a Z value. For a horse, it could also have X/Y displacement values, and likewise for a crawling type of creature.
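
A minimal sketch of the “Get head position” idea, written in C++ rather than Blueprint (my own illustration, not verbatim from the original post). The AMyVRCharacter class and the HeadOffset value are assumptions; the offset corresponds to the standing-humanoid example discussed below:

```cpp
// Returns the character's head position in world space by adding a fixed
// offset (center-to-eyes displacement) to the actor's center position.
FVector AMyVRCharacter::GetHeadPosition() const
{
    // Displacement from the actor's center (roughly hip height) to the head.
    // This can simply be hard-coded, as suggested in the original post.
    const FVector HeadOffset(0.f, 0.f, 90.f);

    // World-space head position = actor center + offset rotated into world space.
    return GetActorLocation() + GetActorRotation().RotateVector(HeadOffset);
}
```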

So, what you’re going to be doing is matching the HMD position to the character's head position. When a person moves their head in the play area, you want to move the controlled actor's head to that same position by offsetting the body.

For example, let’s say I have a humanoid character which stands 180 cm tall. If the character is standing on top of the position [0,0,0] in world space and you ask for “get actor location”, you’ll get [0,0,90]. For the sake of simplicity, let’s pretend the player is also 180 cm. If the player is standing in the center of their room and you ask for the HMD position, you’ll get [0,0,~180]. Now, if you just set the controlled character's location to the player's HMD position, you’d be placing the hips of the character at the player's eye position. So if your displacement vector for the character is [0,0,90], you take the player's HMD position, subtract this displacement value to get [0,0,90], and set the actor location to that value. Now the player's eye height and the character's eye height match.
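
Here is a hedged sketch of that matching step in C++ (my own illustration of the arithmetic above, not the original Blueprint). UHeadMountedDisplayFunctionLibrary::GetOrientationAndPosition returns the HMD pose in tracking space, so the VROrigin scene component used to convert into world space is an assumption about your pawn setup, as is the AMyVRCharacter class:

```cpp
// Places the character so its head lines up with the player's real eye position.
#include "HeadMountedDisplayFunctionLibrary.h"

void AMyVRCharacter::SyncBodyToHMD()
{
    FRotator HMDRotation;
    FVector HMDPositionTrackingSpace;
    UHeadMountedDisplayFunctionLibrary::GetOrientationAndPosition(HMDRotation,
                                                                  HMDPositionTrackingSpace);

    // Convert the tracking-space HMD position into world space via the VR origin
    // (assumed to be a USceneComponent* member marking the tracking origin).
    const FVector HMDPositionWorld =
        VROrigin->GetComponentTransform().TransformPosition(HMDPositionTrackingSpace);

    // Head offset from the worked example: 90 cm above the actor's center.
    const FVector HeadOffset(0.f, 0.f, 90.f);

    // HMD position minus the center-to-head displacement gives the actor location.
    SetActorLocation(HMDPositionWorld - HeadOffset);
}
```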

I’m currently revising my design / code for this to update for 4.13, so the original post may be outdated.

Is there a way you could do a small video tutorial or something demonstrating how you do this? I’m kind of new to this, and I understand what you're talking about; I just don’t know how to do it with Blueprints, and I sadly can’t seem to find anything on the internet that explains this as well as you do.

With your method I’m hoping to have a full IK body as well.

If I did make a video, it would take a long time to produce. I work a day job at Oculus right now, so I don’t have much time for anything else :frowning: I will have to revisit this feature this month using 4.24 to see if the workflow is still the same. I’ll update this post a bit later with more setup details when I get around to it.