How long before gestures move off the screen surface? This technology got off to an exciting start but quickly hit a wall. It was disappointing, for example, when Microsoft had to unbundle the Kinect from the Xbox One.
I've done a lot of full 3D gesture work. The gestures themselves work well, but people are finding it hard to come up with interactions worth caring about or spending battery on.
Google's new stuff has an opportunity to move gestures off the screen, mostly because it is low-power and localized rather than high-power, installation-scale, and invasive.
The Kinect as-is really had no future. It was too limited for gaming and too clunky for anything else. I do think the tech is great, maybe even amazing, but only as part of a larger pie. MS's HoloLens with a Kinect reading your body/hands/face/fingers makes a lot of sense to me. AR that's aware of your every move could be big, perhaps even a game changer in some industries.
At my desktop, I really don't need anything like Kinect. I can type on a keyboard, use a mouse, or swipe on the nearby screen; gesturing is clunky in those scenarios. Now put an AR headset on my face and set me loose, and I need some kind of input that I don't have to carry around. That's something AR needs, and MS has a major lead here.
They'll take off when the basic essential interactions like "select" feel natural and unambiguous -- holding your hand in place, as Kinect required, was never a good replacement for "click"/"tap".
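For context on why dwell never felt like click: the pattern boils down to "hold still inside a small radius for long enough". Here's a minimal sketch of that dwell-to-select logic (hypothetical thresholds and a generic (t, x, y, z) hand-position stream, not Kinect's actual SDK) that makes the tradeoff visible: shorten the dwell and it misfires, lengthen it and every selection drags.

    DWELL_SECONDS = 1.2    # how long the hand must stay put to count as "select"
    JITTER_RADIUS = 0.04   # metres of wobble tolerated while dwelling

    def detect_select(samples):
        """samples: iterable of (timestamp_s, x, y, z) hand positions.
        Yields the timestamp of each completed dwell ("select")."""
        anchor = None  # (t, x, y, z) where the current dwell started
        for t, x, y, z in samples:
            if anchor is None:
                anchor = (t, x, y, z)
                continue
            t0, x0, y0, z0 = anchor
            moved = ((x - x0) ** 2 + (y - y0) ** 2 + (z - z0) ** 2) ** 0.5
            if moved > JITTER_RADIUS:
                anchor = (t, x, y, z)   # hand moved: restart the dwell timer
            elif t - t0 >= DWELL_SECONDS:
                yield t                 # held still long enough: fire "select"
                anchor = None           # require a fresh dwell for the next select

Either way, each "select" costs at least the dwell time plus whatever jitter you tolerate, which is roughly why holding a hand in place never competed with a click or a tap.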
Microsoft's Kinect - 2010
Leap Motion - 2010
Intel RealSense - http://www.intel.com/content/www/us/en/architecture-and-tech...
Google's Soli chip - 2016 - http://www.youtube.com/watch?v=0QNiZfSsPc0
Google's Touch Sensitive Fabric: http://www.wired.com/2015/05/google-atap-project-soli-gestur...