Facial tracking

On the one hand you say that the Kinect will soon be abandoned by most developers; on the other hand you say the main concern is the hardware itself, which is going to be bloody expensive.
Everyone wants to buy a Ferrari for $300 and share it with friends.
Why should the Kinect be abandoned if its price is acceptable and it gives much more precise data, like a depth map, than a usual webcam? The only real exception is frame rate, where a GoPro or another high-fps camera still has the edge, and even that footage still needs to be processed.

However, the specs are very impressive and the frame rate is very suitable for such a specific task.

Kinect for Xbox 360
Field of View: 57.5˚ horizontal by 43.5˚ vertical
Resolvable Depth: 0.8 m -> 4.0 m
Colour Stream: 640 x 480 x 24 bpp 4:3 RGB @ 30 fps, or 640 x 480 x 16 bpp 4:3 YUV @ 15 fps
Depth Stream: 320 x 240 x 16 bpp, 13-bit depth
Infrared Stream: No IR stream
Registration: Color <-> depth
Audio Capture: 4-mic array returning 16 kHz audio
Data Path: USB 2.0
Latency: ~90 ms with processing
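As a rough sketch of what that depth stream gives you: with the field of view and resolution above, each depth pixel can be back-projected to a 3D point using a simple pinhole-camera approximation. This ignores lens distortion and is not what the SDK's own coordinate mapper does internally, just an illustration of the geometry:

```python
import math

# Kinect 360 depth stream parameters, taken from the specs above
W, H = 320, 240
HFOV = math.radians(57.5)
VFOV = math.radians(43.5)

# Approximate pinhole focal lengths derived from the field of view
fx = (W / 2) / math.tan(HFOV / 2)
fy = (H / 2) / math.tan(VFOV / 2)
cx, cy = W / 2, H / 2

def depth_to_point(u, v, depth_m):
    """Back-project depth pixel (u, v) with depth in metres to a 3D point."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return x, y, depth_m
```

A pixel at the image centre maps straight ahead (`depth_to_point(160, 120, 2.0)` gives a point on the optical axis, 2 m out), while corner pixels fan outwards according to the FOV.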

Kinect for Xbox One
Field of View: 70˚ horizontal by 60˚ vertical
Resolvable Depth: 0.8 m -> 4.0 m
Colour Stream: 1920 x 1080 x 16 bpp 16:9 YUY2 @ 30 fps
Depth Stream: 512 x 424 x 16 bpp, 13-bit depth
Infrared Stream: 512 x 424, 11-bit dynamic range
Registration: Colour <-> depth and active IR
Audio Capture: 4-mic array returning 48 kHz audio
Data Path: USB 3.0
Latency: ~60 ms with processing!

Now I’m trying to understand how to get data from the Kinect SDK and apply it to a 3D model using existing code examples, but it is quite difficult, and I don’t think Playmaker would be much help in this case.
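Whatever route you take from the SDK to the model, one step you will almost certainly need is smoothing, because raw tracked positions jitter frame to frame. The `ExponentialSmoother` below is not part of the Kinect SDK, just a minimal sketch of the kind of one-pole filter typically applied to joint or face-point positions before they drive a rig:

```python
class ExponentialSmoother:
    """Damp per-frame jitter in a tracked 3D position before it drives
    a model. alpha near 0 = heavy smoothing (more lag); alpha near 1
    passes the raw data through almost unchanged."""

    def __init__(self, alpha=0.3):
        self.alpha = alpha
        self.state = None

    def update(self, pos):
        # First sample initialises the filter; later samples move the
        # state a fraction alpha of the way toward the new measurement.
        if self.state is None:
            self.state = list(pos)
        else:
            self.state = [s + self.alpha * (p - s)
                          for s, p in zip(self.state, pos)]
        return tuple(self.state)

smoother = ExponentialSmoother(alpha=0.5)
smoother.update((0.0, 0.0, 2.0))
result = smoother.update((1.0, 0.0, 2.0))  # halfway toward the new sample
```

One smoother per tracked point is enough; trade lag against jitter by tuning `alpha` to the ~30 fps frame rate of the sensor.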