Greetings,
I am writing a research paper for a degree, related to VR users with disabilities. As part of this research, I have decided to create a tool or option that gives users with limited mobility a greater range of motion with a controller than they have in real life.

My initial thought was to have a scalable sensitivity setting (think sensitivity settings for a mouse) that could move a VR hand a large amount with, let's say, just a small movement of the wrist.

I have been attempting to use the relative location vectors between the VROrigin and the motion controller's previous and current locations to offset the relative location of the controller, but any kind of offset breaks the sim. Using Add Relative Location works better, but the position of the hand ends up completely different from what I expect.

I have also tried creating an origin at runtime at the hand and using that to get a vector to the hand's position, multiplying it, and adding it to the location, but this also gives unwanted results.
Perhaps I’m approaching this wrong?
I'm wondering if manipulating the tracking transforms directly, instead of the Motion Controller components, would yield better results, but I don't see any way of accessing them via Blueprint.

Any help or insight on this would be greatly appreciated.
The pose of the Motion Controller Component relative to its parent is always updated by code; any changes to it will be overwritten. Instead, remove the visualizations from it and use it as the true controller pose. Add a child component to it for the visible controller, and modify the local pose of that component instead.

To amplify motion in the room space, you'd first need some sort of system that calibrates the base pose. You could do it with a button press that moves the child component to the parent controller's location and stores the controller's location relative to the VROrigin in a variable. Then every frame you get the offset between the stored position and the current true controller pose, and set the local position of the child component to that offset times a scale factor.
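In rough C++ terms, it would look something like the sketch below. The names here (VROrigin, HandVisual, AmplifyScale, and the class itself) are hypothetical, and I'm assuming a typical VR pawn setup; in Blueprint it's the same hierarchy plus a reset input and a per-tick update.

```cpp
// A minimal sketch, not a drop-in class -- component names are placeholders.
#pragma once

#include "CoreMinimal.h"
#include "GameFramework/Pawn.h"
#include "Camera/CameraComponent.h"
#include "MotionControllerComponent.h"
#include "VRAmplifyPawn.generated.h"

UCLASS()
class AVRAmplifyPawn : public APawn
{
	GENERATED_BODY()

public:
	AVRAmplifyPawn()
	{
		PrimaryActorTick.bCanEverTick = true;

		VROrigin = CreateDefaultSubobject<USceneComponent>(TEXT("VROrigin"));
		RootComponent = VROrigin;

		Camera = CreateDefaultSubobject<UCameraComponent>(TEXT("Camera"));
		Camera->SetupAttachment(VROrigin);

		// True tracked pose: its relative transform is overwritten by the
		// tracking code every frame, so we never try to modify it.
		MotionController = CreateDefaultSubobject<UMotionControllerComponent>(TEXT("MotionController"));
		MotionController->SetupAttachment(VROrigin);

		// Visible hand, free to offset without fighting the tracking update.
		HandVisual = CreateDefaultSubobject<USceneComponent>(TEXT("HandVisual"));
		HandVisual->SetupAttachment(MotionController);
	}

	// Bind to a button press: re-centre the visible hand and store the
	// controller's rest position relative to the VROrigin.
	void ResetRestPose()
	{
		HandVisual->SetRelativeLocation(FVector::ZeroVector);
		RestPose = MotionController->GetRelativeLocation();
	}

	virtual void Tick(float DeltaSeconds) override
	{
		Super::Tick(DeltaSeconds);

		// Offset of the real controller from the stored rest pose, in VROrigin space.
		const FVector Offset = MotionController->GetRelativeLocation() - RestPose;

		// Push the visible hand further along that offset. Caveat: this applies
		// a room-space vector as a controller-local one, which only lines up
		// while the controller is unrotated.
		HandVisual->SetRelativeLocation(Offset * (AmplifyScale - 1.0f));
	}

	UPROPERTY(EditAnywhere)
	float AmplifyScale = 2.0f;

private:
	UPROPERTY() USceneComponent* VROrigin;
	UPROPERTY() UCameraComponent* Camera;
	UPROPERTY() UMotionControllerComponent* MotionController;
	UPROPERTY() USceneComponent* HandVisual;

	FVector RestPose = FVector::ZeroVector;
};
```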
This is very similar to what I have attempted already. I have reworked the blueprint to reflect what you said but am still getting the same undesired results… this is what I have:

I have not created the extra multiply node for amplified motion just yet because the positioning is still not correct, although I have tested the amplification separately and it works as expected. When I start with this setup and press R, the visualized controller sits about a foot away to my back right, with no apparent reason for it.
It is indeed parented, and the plus node has been removed. The positioning is now much more accurate, thank you. However, a problem persists… this only works while I'm aligned to the forward axis; as soon as I turn to the left or right, the direction of the controller's movement no longer matches the visual hand.

It would be better if the RestPos position rotated around with the HMD, but parenting RestPos to the camera or the HMD static mesh achieves nothing. The hierarchy, FYI:
Oh right, since the visualization component is relative to the controller, you'll have to account for that. The easiest way is probably to parent it directly to the VROrigin and add the + node back.

Factoring in the user rotating in the room makes it trickier. You could probably do it by parenting the RestPos to the Camera and getting and setting the world locations, instead of the relative ones, in the reset. Then you'd have to get the location of the RestPos relative to the VROrigin in the tick, which you can do with a Get Relative Transform node on the camera and then a Transform Location node from it on the rest pose's relative location.
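Continuing the hypothetical components from the earlier sketch (with a RestPos scene component now attached to the Camera), the C++ equivalent would be roughly:

```cpp
// RestPos is a USceneComponent attached to the Camera -- a hypothetical
// setup following this thread, not a confirmed project layout.

// Reset: use world locations, since RestPos now rotates with the camera
// and its relative location no longer corresponds to room space directly.
void AVRAmplifyPawn::ResetRestPose()
{
	const FVector ControllerWorld = MotionController->GetComponentLocation();
	RestPos->SetWorldLocation(ControllerWorld);
	HandVisual->SetWorldLocation(ControllerWorld);
}

// Tick helper: map RestPos from camera space into VROrigin space -- the C++
// counterpart of the Get Relative Transform + Transform Location nodes.
FVector AVRAmplifyPawn::RestPoseInOriginSpace() const
{
	return Camera->GetRelativeTransform()
	             .TransformPosition(RestPos->GetRelativeLocation());
}
```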
OK, so I have changed the relative locations to world and parented the RestPos to the camera successfully. The only hiccup at that point is a slight rotation of the hand when rotating the HMD, but apart from that, moving the hand while turned works correctly.

However, when I implement the Set World Location node at the end of the tick, the hand disappears completely. I attempted to follow your instructions, but perhaps I got this part wrong?
I meant using the transform in setting the visualization component. Trying it out, I also had to transform the vector from room space to the local space of the Motion Controller Component. This correctly moves the component for me.

The downside is that any head movement will also move the controller, although it still seems surprisingly controllable.
OK, let me build what you have… In the meantime, could you explain the Transform Location node a little? I've never used this node and it has me a bit baffled!

From the description, it converts, let's say, an object's local space to world space. The first input, 'T', is the object's transform, which contains a location to convert to world space, but you could easily get that from Get World Location, so what is the second input, 'Location', for?

Does the function return a vector from the location of 'T' to the 'Location' input? Is that what it's doing?
Your build is definitely a lot more accurate, though! Nice work!

I recognise that the RestPos rotates by the same amount as the camera, but it is not following the camera in an orbital sense; it's locked to the designated world location.
A transform is a mapping from one space to another. The Get Relative Transform node returns the mapping between the spaces of the object and its parent. In my example, both nodes get the mapping between the VROrigin space and the component's own local space.

In the first one, we have the location of the RestPos in the space of its parent, which is the camera. To get it into the same space as the controller pose, we use the Transform Location node to take the Location input from the camera's local space to the VROrigin's local space. The node takes rotation and scale into consideration; without them, it would be the equivalent of adding the location vectors together.

The second one uses an Inverse Transform Direction node. It transforms the vector in the opposite direction, from the VROrigin's local space to the controller's local space. Note that it's also a direction node instead of a location node, so it doesn't affect the length of the input vector, just the direction.
So in my example we modify the vector like this:
Camera space → VROrigin space → subtract (destination − source) for a direction + distance vector in VROrigin space → scale that vector → direction + distance in Motion Controller Component space
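As a rough C++ equivalent of that chain, updating the Tick from my earlier sketch (same hypothetical component names as before):

```cpp
void AVRAmplifyPawn::Tick(float DeltaSeconds)
{
	Super::Tick(DeltaSeconds);

	// Camera space -> VROrigin space: RestPos is stored relative to the Camera.
	const FVector RestInOrigin = RestPoseInOriginSpace();

	// Subtract (destination - source): a direction + distance vector in
	// VROrigin space, from the rest pose to the true controller pose.
	const FVector Offset = MotionController->GetRelativeLocation() - RestInOrigin;

	// Scale the offset for amplified motion.
	const FVector Scaled = Offset * AmplifyScale;

	// VROrigin space -> Motion Controller Component space. Direction only,
	// so the length is preserved -- the equivalent of the Inverse Transform
	// Direction node.
	const FVector LocalOffset =
		MotionController->GetRelativeTransform().InverseTransformVectorNoScale(Scaled);

	// Since HandVisual is a child of the controller, this places it at the
	// scaled offset in room space.
	HandVisual->SetRelativeLocation(LocalOffset);
}
```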
I hope this makes sense.

I also have the RestPos parented to the camera, so it keeps the camera offset until it is moved with the reset key.
Thanks for the explanation, it makes sense now. As for the project, I'm out of time. I have added a sensitivity slider to the menu widget and an input action to reset RestPos with the Oculus A button. I will also add a brief description of the aim of the tool in the world as a Text Render actor and call it done. This should be enough to present the project as a 'proof of concept'.

I had planned on amplifying the rotation as well, among other things, but this took way longer than expected, probably because I'm a novice and should have asked for help sooner!

There is so much that could be done with this, and it surprises me that this kind of thing isn't standard.

So, Rectus, I am going to set the issue as resolved and leave with a big THANK YOU for all your help with it. Maybe sometime in the future we will see this fleshed out and in action.