According to the Quick Start docs regarding the HoloLens 1, “HoloLens sends eye tracking, gesture, voice, current device pose, and spatial mapping input to your PC, which then sends (or streams) rendered frames back to your HoloLens.” My question is how to go about actually sending this input data from the HoloLens 1 headset to the UE4 editor in 4.23 Blueprints? Or is this a HoloLens 2 only feature that we will have to wait to use once the devices ship?
I see “Windows Spatial Input Tap Gesture”, as well as the other HoloLens 2 specific gestures, but none of this data appears to be sent from the HoloLens 1 to UE4 besides position in world space.
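For reference, this is roughly how I expected to consume the tap gesture on the C++ side. A minimal sketch only, assuming an Action Mapping named "TapGesture" has been created under Project Settings > Input and mapped to the Windows Spatial Input tap key; the pawn and function names below are made up for illustration:

```cpp
// MyHoloPawn.cpp -- hypothetical pawn, not an official sample.
// Assumes an Action Mapping "TapGesture" exists in Project Settings > Input
// and is bound to the Windows Spatial Input tap key from the WMR plugin.

#include "GameFramework/Pawn.h"
#include "Components/InputComponent.h"

void AMyHoloPawn::SetupPlayerInputComponent(UInputComponent* PlayerInputComponent)
{
    Super::SetupPlayerInputComponent(PlayerInputComponent);

    // Fires when the tap gesture is recognized (on device, or forwarded over remoting).
    PlayerInputComponent->BindAction("TapGesture", IE_Pressed, this, &AMyHoloPawn::OnTapGesture);
}

void AMyHoloPawn::OnTapGesture()
{
    UE_LOG(LogTemp, Log, TEXT("Air tap received"));
}
```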
Remoting doesn’t simulate the device. Since the HoloLens 1 doesn’t support eye tracking or hand tracking, remoting can’t send that data from the device. To use HoloLens 2 capabilities while remoting, you need a HoloLens 2 device.
Thank you for your reply. So there is no way to get “Gaze / Air Tap” gestures from the HL1, because these gestures are not supported in the 4.23 “Windows Spatial Input - Gestures”, and all of the Blueprints available for this type of input are meant for HL2 remoting?
So I’m developing for HL2 right now and am unable to use hand input while remoting from the UE4 editor. Head tracking works and things are rendered, but my hands are not tracked. I’ve searched everywhere I could, and the documentation just says that these features are enabled by default. Any idea how to get this to work?
Yes, it works on the device and the emulator perfectly. But my level is a bit graphics-intensive and I’m thinking of using my PC to do the heavy lifting, hence the streaming. I’m currently using the latest source-code build of UE4. I’m actually using Microsoft’s UXTools for Unreal. It does hand tracking and simple gesture recognition out of the box, so I haven’t really done the hand tracking myself.
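For context, my hand setup just goes through the engine’s generic motion controller path rather than anything custom. A rough sketch of what the pawn constructor looks like, with class and component names invented for this example (LeftHand/RightHand are assumed to be UMotionControllerComponent* members declared in the header):

```cpp
// MyHandsPawn.cpp -- minimal sketch of consuming hand poses via the generic
// MotionController path, which the Windows Mixed Reality plugin feeds when
// hand tracking data is available (on device, or over remoting).

#include "GameFramework/Pawn.h"
#include "MotionControllerComponent.h"

AMyHandsPawn::AMyHandsPawn()
{
    // Tracking origin for the headset and hands.
    USceneComponent* VROrigin = CreateDefaultSubobject<USceneComponent>(TEXT("VROrigin"));
    RootComponent = VROrigin;

    // Left and right hand components driven by the XR tracking system.
    LeftHand = CreateDefaultSubobject<UMotionControllerComponent>(TEXT("LeftHand"));
    LeftHand->SetupAttachment(VROrigin);
    LeftHand->MotionSource = FName(TEXT("Left"));

    RightHand = CreateDefaultSubobject<UMotionControllerComponent>(TEXT("RightHand"));
    RightHand->SetupAttachment(VROrigin);
    RightHand->MotionSource = FName(TEXT("Right"));
}
```

On device this tracks fine; the question is only why the same components stop receiving hand data when running through remoting from the editor.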
I’ve also posted a question on AnswerHub about this here -