I’m basing my algorithm for deriving the torso orientation on normative human physical constraints (assuming my own physical constraints are normative).
First, I found my physical limits on how far I can turn my head to the left and right without turning my torso. If my torso is oriented at 0 degrees, my head can yaw between -90 and +90 degrees relative to my torso orientation. This is a good helper constraint for approximating torso rotation.
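As a sketch of how that constraint can be applied (plain Python; the function name and the exact ±90 degree default are just illustrative, not my actual code):

```python
def clamp_torso_yaw(head_yaw_deg, torso_yaw_deg, max_neck_yaw_deg=90.0):
    """Keep the torso yaw estimate within the neck's yaw range of the head.

    Yaws are in degrees; the wrap-around handling assumes yaw is measured
    on a circle, so 179 and -179 are treated as 2 degrees apart.
    """
    # Signed smallest angle from torso to head, wrapped into (-180, 180].
    delta = (head_yaw_deg - torso_yaw_deg + 180.0) % 360.0 - 180.0
    if delta > max_neck_yaw_deg:
        torso_yaw_deg = head_yaw_deg - max_neck_yaw_deg
    elif delta < -max_neck_yaw_deg:
        torso_yaw_deg = head_yaw_deg + max_neck_yaw_deg
    return torso_yaw_deg
```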
The second thing I look at is the position of the player’s hands relative to the HMD position. If you draw a line connecting one hand to the other, find the midpoint of that line, and then draw a line from the torso position to this midpoint (flattening the Z value), you get an additional direction vector. If the player moves both of their hands behind their back, we can detect this by taking the dot product of this vector with the head’s forward vector: if the dot product is negative, we know the hands are behind the back and we can flip the vector’s direction.
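In sketch form (Z-up to match the flattening above; numpy and all the names here are just for illustration):

```python
import numpy as np

UP = np.array([0.0, 0.0, 1.0])  # Z-up, matching "flattening the Z value"

def hands_direction(left_hand, right_hand, torso_pos, head_forward):
    """Horizontal direction from the torso toward the midpoint of the hands."""
    midpoint = (left_hand + right_hand) / 2.0
    v = midpoint - torso_pos
    v -= np.dot(v, UP) * UP          # flatten out the vertical component
    n = np.linalg.norm(v)
    if n < 1e-6:
        return None                  # hands directly above the torso: no signal
    v /= n
    # Hands behind the back make this vector point backwards; detect and flip.
    if np.dot(v, head_forward) < 0.0:
        v = -v
    return v
```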
The third thing I look at is the orientation of the wrists themselves. Generally, the forward direction of the motion controllers is suggestive of the forward direction of the torso, particularly if we only look at the yaw and invert the yaw when a motion controller is being held upside down.
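One way to read “upside down” is that the controller’s up vector points below the horizon; a sketch under that assumption (same Z-up convention, illustrative names):

```python
import numpy as np

UP = np.array([0.0, 0.0, 1.0])  # Z-up

def controller_yaw_hint(ctrl_forward, ctrl_up):
    """Horizontal forward hint from one controller. If the controller is held
    upside down (its up vector points below the horizon), flip the hint."""
    v = ctrl_forward - np.dot(ctrl_forward, UP) * UP
    n = np.linalg.norm(v)
    if n < 1e-6:
        return None              # pointing straight up/down: yaw is ill-defined
    v /= n
    if np.dot(ctrl_up, UP) < 0.0:
        v = -v
    return v
```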
So, generally, I combine all of these data points to get roughly an 80% approximation of the torso rotation. You can move your arms anywhere and turn your head anywhere, and the derived torso will generally be within about 20 degrees of your actual one. It could be improved, though, and that’s why I’m asking around to see if anyone else has figured it out. Below is a rough sketch of the combination step, followed by some vague hand waving on how I think the approach could be improved.
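Something like this, where the blend weights are placeholders rather than my tuned values, and the clamp at the end is the ±90 degree neck limit from above. `hint_dirs` would be the flattened head forward plus the hand and controller hints from the earlier sketches, with `None` entries skipped:

```python
import math
import numpy as np

def yaw_of(v):
    """Yaw angle in degrees of a vector's horizontal (XY) component, Z-up."""
    return math.degrees(math.atan2(v[1], v[0]))

def estimate_torso_yaw(head_forward, hint_dirs, hint_weights):
    """Blend the horizontal hints into one torso yaw, then clamp it so the
    head never ends up yawed more than 90 degrees away from the torso."""
    acc = np.zeros(3)
    for d, w in zip(hint_dirs, hint_weights):
        if d is not None:                  # skip hints that had no signal
            acc += w * np.asarray(d)
    if np.linalg.norm(acc) < 1e-6:
        acc = np.asarray(head_forward)     # no usable hints: fall back to the head
    torso_yaw = yaw_of(acc)
    head_yaw = yaw_of(head_forward)
    delta = (head_yaw - torso_yaw + 180.0) % 360.0 - 180.0
    if abs(delta) > 90.0:
        torso_yaw = head_yaw - math.copysign(90.0, delta)
    return torso_yaw
```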
**Not implemented bonus:** This part can probably be improved a lot, and I don’t do this yet. Generally, if you hold a hand controller straight forward and then pitch your wrist up and down, there are physical constraints on how far you can pitch your wrist. We also know the hand position relative to the HMD, so we know whether the player’s hand is fully extended or not. If the hand is fully extended, then the elbow joint has no rotation, and any pitch we see comes only from the wrist rotation and the shoulder rotation. Somehow, there could be some fancy math done to get a pretty good approximation of the shoulder socket position, which is directly connected to the torso.
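I haven’t built this, but the “fully extended” test could start as something as crude as comparing the hand-to-HMD distance against a calibrated arm length (everything here, including the tolerance, is a guess):

```python
import numpy as np

def arm_fully_extended(hmd_pos, hand_pos, arm_length, tolerance=0.05):
    """Rough straight-arm test: the hand is nearly a full arm length from the
    head. arm_length would come from a calibration pose; the 5 cm tolerance is
    a guess. Measuring from the HMD rather than the (unknown) shoulder adds
    error, so this is only a first approximation."""
    dist = np.linalg.norm(np.asarray(hand_pos) - np.asarray(hmd_pos))
    return dist >= arm_length - tolerance
```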
**Also not done:** We know the head doesn’t rotate more than 90 degrees from the torso rotation, but the arms have rotational constraints too. I suppose if we know the length of a player’s arm and use what we know about human anatomy and proportions, we can approximate the elbow position, deriving it from the motion controller position and rotation. If our elbow position is accurate, we can get a pretty good idea of the shoulder position based on the measured arm length and the head position. And if we can get good positions for the left and right shoulders, we can run a vector from the neck position to each shoulder, take the cross product against the up vector, and choose between the two resulting directions based on the HMD gaze vector.
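That last step might look something like this (one reading of it, anyway; Z-up again, and all names are illustrative):

```python
import numpy as np

UP = np.array([0.0, 0.0, 1.0])  # Z-up

def torso_forward_from_shoulders(neck_pos, left_shoulder, right_shoulder,
                                 head_forward):
    """Torso forward from estimated shoulder positions: take the cross product
    of up and the left-to-right shoulder axis, then pick whichever of the two
    perpendicular directions agrees with the HMD gaze."""
    to_left = left_shoulder - neck_pos
    to_right = right_shoulder - neck_pos
    shoulder_axis = to_right - to_left      # runs left-to-right across the body
    forward = np.cross(UP, shoulder_axis)   # horizontal, perpendicular to the shoulders
    n = np.linalg.norm(forward)
    if n < 1e-6:
        return None                         # degenerate shoulder estimate
    forward /= n
    # The cross product has two valid signs; take the one facing the same
    # general way the head is looking.
    if np.dot(forward, head_forward) < 0.0:
        forward = -forward
    return forward
```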