How to calibrate HMD to player height

I don’t know if anyone else has tried to do this yet, but I found that it’s important to calibrate the Vive to the height of the player. Since UE4 uses centimeters (and because the metric system is superior), I’ll use centimeters for my units of measurement.

For the Vive, my camera has to be placed at the feet of the in-game character. When the player puts the HMD on their head, they raise the camera and displace it by the appropriate Z value. Let’s say you have a character avatar which stands at 170cm, with an eye height of about 165cm. Players come in all sorts of heights, so what happens when the player’s height doesn’t match the avatar’s? I happen to be 170cm, which is coincidentally the height of my character, but what if I were 200cm or 140cm tall? It’s not practical to create taller and shorter avatars for every player, so the better approach is to adjust the player’s eye height to match the eye height of the in-game avatar. Also, keep in mind that you will want to measure player arm lengths for really good motion controller animation stuff!

Measuring the Player:
So, I instruct the player to stand up straight and hold their arms straight down at their sides. When they are ready to calibrate, they pull the triggers of both hand controllers at the same time. Since the Vive measures the HMD height as a total displacement from the floor, you can effectively grab the player’s height in centimeters from the device position’s Z value! I also know the exact position of the hand controllers as a displacement from the ground, so I can measure arm lengths as well. If you want to be extra precise with arm lengths, you can instruct the player to hold their arms at their sides for one calibration pass, straight up for a second pass, and out in a T-pose for a last pass.

You may also want to grab a ratio of the player height to the avatar height so that you can correctly animate the avatar bones.
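The two steps above (reading the player’s height off the HMD’s Z value, then deriving a scale ratio against the avatar) can be sketched in plain C++. This is illustrative only, not a real device API: the names `Calibrate`, `HmdZ`, and `HandZ`, the 0.935 eye-to-height ratio, and the 25cm eye-to-shoulder drop are all my own assumptions.

```cpp
// Units are UE4 centimeters; device positions are displacements from the floor.
struct CalibrationResult
{
    float PlayerEyeHeightCm;  // HMD Z while standing up straight
    float ArmLengthCm;        // rough shoulder-to-hand estimate
    float AvatarScaleRatio;   // estimated player height / avatar height
};

// Assumed anthropometric ratio: eye height is roughly 93.5% of body height.
inline float EstimateBodyHeight(float EyeHeightCm)
{
    return EyeHeightCm / 0.935f;
}

// Called when the player stands straight, arms at their sides, and pulls
// both triggers. HmdZ and HandZ come from the tracked device positions.
inline CalibrationResult Calibrate(float HmdZ, float HandZ, float AvatarHeightCm)
{
    CalibrationResult Result;
    Result.PlayerEyeHeightCm = HmdZ;

    // With arms hanging straight down, the controller sits near the hand,
    // so arm length is roughly shoulder height minus hand height.
    const float ShoulderZ = HmdZ - 25.0f;  // assumed eye-to-shoulder drop
    Result.ArmLengthCm = ShoulderZ - HandZ;

    Result.AvatarScaleRatio = EstimateBodyHeight(HmdZ) / AvatarHeightCm;
    return Result;
}
```

The scale ratio is what you would feed into the avatar bone animation mentioned above.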

Compensating for height:
After you get the player’s eye height, you have to compute a delta against the character’s eye height. For my own testing, I placed a stool in the middle of the room (at the character location, so I know where it is in VR! Be safe, guys!) and gained 30cm of height displacement, which is useful for testing. Let’s assume that I have a +30cm Z value relative to the character’s eye height. My character’s default camera location offset from the character’s origin [0,0,0] is set to [0,0,-90] (the character’s feet). Don’t change this value directly; within the character construction script, I detect whether we’re using a Vive or a Rift and set the offset values accordingly. Now, what if a player is 30cm taller? To get the camera to the avatar’s eye height, you subtract 30cm from the base height, so [0,0,-90] - [0,0,30] becomes [0,0,-120]. When the player stands up straight, they’ll move through the height discrepancy to land at avatar eye height. You will also want to apply this same height delta to the Z value of your motion controller positions!
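Here is a minimal sketch of that offset math (the struct and function names are made up for illustration; in an actual project you’d use UE4’s FVector):

```cpp
// Heights in UE4 centimeters.
struct FVec { float X, Y, Z; };

// BaseHmdOffset is the per-device camera offset (e.g. [0,0,-90] for the Vive).
// A player taller than the avatar yields a positive delta, which is subtracted
// so the camera lands at the avatar's eye height when they stand up straight.
inline FVec ComputeCameraOffset(FVec BaseHmdOffset,
                                float PlayerEyeHeightCm,
                                float AvatarEyeHeightCm)
{
    const float DeltaZ = PlayerEyeHeightCm - AvatarEyeHeightCm; // e.g. +30
    return { BaseHmdOffset.X, BaseHmdOffset.Y, BaseHmdOffset.Z - DeltaZ };
}
```

For example, a 195cm-eye-height player against a 165cm-eye-height avatar with a [0,0,-90] base offset gives [0,0,-120], matching the numbers above.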

Compensating for initial rotation:
One other really important part which I missed initially: you want to grab the starting rotation of the avatar at game start. This becomes a global rotation offset which you apply to the HMD so that the player’s head position and rotation match the character’s exactly. If your player start rotation is [0,0,0], this isn’t necessary, but you can’t guarantee or assume that level designers will know this is important, and you don’t want to tell them they must use zero rotation.
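To make the rotation offset concrete, here is a minimal yaw-only sketch in plain C++ (hypothetical names; an actual UE4 implementation would rotate with FRotator, as in the pseudo-code below):

```cpp
#include <cmath>

struct FVec { float X, Y, Z; };

// Rotate a tracking-space position about the vertical (Z) axis by the avatar's
// starting yaw, in degrees, so the tracking axes line up with a rotated
// player start before the position is added to the actor location.
inline FVec RotateYaw(FVec P, float YawDegrees)
{
    const float Rad = YawDegrees * 3.14159265358979f / 180.0f;
    const float C = std::cos(Rad);
    const float S = std::sin(Rad);
    return { P.X * C - P.Y * S, P.X * S + P.Y * C, P.Z };
}
```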

Blueprint Pseudo-code:

FVector HMDOffset (default: [0,0,0])
     If Vive: [0,0,-90]
     If Rift: [0,0,70]

float GetHMDHeight()
   return HMDDevice.Position.Z;

float GetAvatarEyeHeight()
   return 165;

FVector PlayerHeightDelta = [0,0,GetHMDHeight() - GetAvatarEyeHeight()];

Camera->SetRelativeLocation(HMDOffset - PlayerHeightDelta);

//construction script
const FRotator HMDRotationOffset = GetActorRotation();

FVector GetHMDWorldPos()
   return RotateVector(HMDDevice.Position, HMDRotationOffset) + GetActorLocation() + HMDOffset;

bool IsUsingRift()
   // Hack: the Rift reports a raw sensor temperature, the Vive does not
   return GetRawSensorData().Temperature > 0;

FTransform GetMotionController(bool left)
   FTransform returnVal;
   if (left)
      returnVal.Position = (LeftMotionControllerComponent->GetWorldLocation() + HMDOffset) - PlayerHeightDelta;
      returnVal.Rotation = LeftMotionControllerComponent->GetWorldRotation();
   else
      returnVal.Position = (RightMotionControllerComponent->GetWorldLocation() + HMDOffset) - PlayerHeightDelta;
      returnVal.Rotation = RightMotionControllerComponent->GetWorldRotation();
   return returnVal;

I hope this helps anyone speed up their head and hand tracking capabilities and to account for differences in player heights :slight_smile: This should be generalized enough that it’s capable of supporting Oculus Touch whenever it becomes available.

Looks pretty interesting and something good to use. Coming back here when I need it :wink:

Thanks OP

Forgive my noob question, and please ELI5: why do you need to calibrate? You have the XYZ position of the HMD and two controllers. Isn’t it more reasonable to just treat them as 3 separate objects, where one is the in-game camera and the remaining two are presented as a pair of gloves or resemble the shape of the controllers? Why all this T-pose, rotation, etc.?

In true ELI5 fashion: do you remember how, when you were 5, all of the countertops were out of your reach? And to get your toothbrush or something, you’d have to go find a foot stool or chair to stand on? That’s because the house was designed for fully grown adults. Well, if you are the height of a 5 year old in real life, there’s no reason why you have to be the height of a 5 year old in virtual reality. Everything in virtual reality should be equally accessible to everyone, regardless of their height in real life.

If you don’t calibrate and account for the player’s height, then a 5 year old in real life is going to have the experience of a 5 year old in virtual reality. That magical key you placed on the table to unlock the chest will be out of reach (and probably not even visible).

On the other extreme, imagine that you are playing as a 10 meter giant in VR. In real life, your eye height might be at 1.5m, but in VR it needs to be at 9.7m because you’re a giant. If you assume that the player is always 1.5m tall and you boost the camera position by exactly (9.7 - 1.5) = 8.2m, but the player is really 1.4m, then the boosted eye height is going to be 10cm off and the player will be looking through the nostrils of the giant rather than the eyes of the giant. And if you’re a five year old, you’re looking through the giant’s neck. Now, if other characters try to make eye contact with you, it’s going to look kind of weird, because they’ll be looking at something way above your head or at something on your chin.
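That arithmetic can be checked with a tiny sketch (hypothetical function names; heights in centimeters):

```cpp
// The naive approach: assume every player's eyes are at 150cm and add a
// fixed boost of 970 - 150 = 820cm to reach the giant's 970cm eye height.
inline float FixedBoostEyeHeight(float PlayerEyeCm)
{
    const float AssumedEyeCm = 150.0f;
    const float GiantEyeCm   = 970.0f;
    return PlayerEyeCm + (GiantEyeCm - AssumedEyeCm);
}

// The calibrated approach: measure the player's real eye height and add
// exactly the delta needed, so every player ends up at 970cm.
inline float CalibratedEyeHeight(float PlayerEyeCm)
{
    const float GiantEyeCm = 970.0f;
    return PlayerEyeCm + (GiantEyeCm - PlayerEyeCm);
}
```

With a 140cm-eye-height player, the fixed boost lands at 960cm (10cm low, exactly the nostril problem described above), while the calibrated version lands at 970cm regardless of who is playing.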

As you can see, it is a lot easier for the designer to design environments for a known character baseline than to worry about variations in players’ physical heights, shapes, sizes and limitations.

Thank you for your answer. Now I’m aware that sometimes in VR games it’s advisable to scale game objects proportionally to the user’s height.

Do note that if you “artificially” move someone up or down to “compensate” for their height, it will feel like they are either floating above the ground or wading through knee-deep snow. Be careful in what way you use it.

A good example of calibrated player height is Hover Junkers, where the player is asked to calibrate their height by looking at a few lights.
The game then uses this variable to scale the player mesh visible to other players, so a “headshot” actually matches the head of the real player.

Interesting thread, sub this so I can come back when I get my editor going properly.