I am currently trying to implement a shrinking effect for my VR character (Oculus Rift). My player is normally full-sized and should have the ability to shrink down to a small, ant-like size. I tried using “Set World to Meters Scale” and it seemed to work, but it apparently has a size limit: whenever I go below 0.1 I get rendering issues. So I thought about scaling the whole world up by 10 and setting the default meters scale to 1000. This works, but now I have a problem with positional tracking when the player is shrunk.
My player rides in a “cart”, and here the problems begin. In the normal state everything is fine, and positional tracking of the HMD looks good. But after shrinking (setting World to Meters Scale to 0.1), positional tracking doesn’t seem to work, and my “cart” doesn’t scale correctly either. I tried scaling it manually together with setting World to Meters Scale, but my player still can’t lean “outside” of it.
Just shrinking the character doesn’t seem to work either, since then I get severe stereo rendering problems: the images are so far apart that you instantly get a headache.
Is there a better way to shrink a VR player than “Set World to Meters Scale” — one that keeps positional tracking working and can shrink to very small sizes?
Ach, that’s a bit sh**. I was hoping the IPD would offer something close to a quick fix; however, after going over the engine code in more depth, it looks like the implementation isn’t really meant to accommodate this.
It looks like what we’re really trying to do is change the HMD’s HmdToEyeOffset so the world is scaled correctly to the size of the character. This is driven in part by the IPD; however, it looks like the IPD override is only used in development anyway:
// for debugging purposes only: overriding IPD
if (CurrentSettings->Flags.bOverrideIPD)
{
    check(CurrentSettings->InterpupillaryDistance >= 0);
    CurrentSettings->EyeRenderDesc[0].HmdToEyeOffset.x = -CurrentSettings->InterpupillaryDistance * 0.5f;
    CurrentSettings->EyeRenderDesc[1].HmdToEyeOffset.x = CurrentSettings->InterpupillaryDistance * 0.5f;
}
What’s more, I think what you’re seeing with regards to the headset not tracking has to do with the way it calculates the rotation from the HMD input — although I’d have to debug it to find out exactly why:
const FQuat ViewOrient = ViewRotation.Quaternion();
const FQuat DeltaControlOrientation = ViewOrient * CurEyeOrient.Inverse();
this->LastPlayerRotation = DeltaControlOrientation;
Further to all this, the height/room-scale tracking wouldn’t feed back correctly even then, because if you simply made the character small (even with the eyes correctly set up), you’d track through the ground if you crouched IRL.
So, OK — admittedly a far more complicated challenge than I gave it credit for. What I might try is a slightly different approach from your proposed WorldToMeters idea: instead, consider an engine modification to the Oculus code. It’s admittedly less than desirable, an outright sledgehammer approach, but I can’t see any way forward other than this.
In principle the WorldToMeters idea is sound, but you don’t want it to affect everything (world, objects, physics, etc.). I’d search all files in Engine\Plugins\Runtime\OculusRift for WorldToMeters, then go through each one and attempt to accommodate an additional scaling factor. If you managed to override the effective world-to-HMD-eyes factor, along with applying a scaling factor to the room-scale displacement, this might keep everything in check — your world, its physics and rendering, and the HMD itself.
I know it sounds like a bit of a slog, but I can’t see any other “quick and dirty” solution that’s likely to offer more than a quick, buggy proof of concept — you’ve already encountered rendering issues applying a new global WorldToMeters scale, for one; and I’d expect the physics to behave completely differently, for another.
I don’t think setting something as globally-affecting as the World to Meters scale would really yield any positive results; I’d expect things to go wrong with the simulation itself. However, it sounds like the character-shrinking angle may be the “purest” way to simulate the scenario within the engine, and I’d therefore expect it to yield the best results.
The stereo artefacting: could it be that you need to adjust the inter-pupillary distance (IPD) of the player’s eyes to a value that reflects your shrunken character? If you’re 20 cm tall but still have eyes 7 cm apart, things will look weird.
I’d recommend playing around with SetInterpupillaryDistance on the HMD interface. Consider scaling it by the same ratio as your character (so that if your (human) player has an HMD profile set up with a real-life IPD, this is reflected when shrunken). A change to this setting should be required when “shrinking”, because something that was close by quite literally becomes relatively far away when you’re small; your eye separation needs to reflect that change.
I’d be really interested in how it goes.
Thank you for the tip, I will try changing the interpupillary distance too. I’ll let you know how it goes.
Well, the stereo artifacting is gone, but positional tracking is still not working… also, the shrinking effect is barely noticeable — you are smaller, but it doesn’t “look like it”.
Hm, that’s indeed a sledgehammer method. But maybe changing this line
// correct position according to BaseOrientation and BaseOffset.
const FVector Pos = (ToFVector_M2U(OVR::Vector3f(InPose.Position), FinalWorldToMetersScale) - (Settings->BaseOffset * FinalWorldToMetersScale)) * CameraScale3D;
and removing the multiplication by CameraScale3D might help with the positional tracking?
Sorry man — I was on holiday. That’s exactly the bit of code that jumped out at me when going over the VR system. It would probably take a few hours to work through, so I didn’t do it myself; however, it looks like the place to start if there’s no “quick fix” like the ones we both tried.
Out of interest, did you get anywhere with this approach?