Forget the Leap Motion sensor from Kickstarter. Soon, you’ll be able to use the same gestures that you already know from using your Xbox’s Kinect camera to control your computer, assuming Microsoft has its way.
Microsoft has been showing off Kinect-embedded concept devices at its newly opened Envisioning Center during its annual TechForum, where the Redmond-based company demos its latest research projects for tech journalists. Judging by the video tour of the Envisioning Center, Microsoft clearly believes the future involves wall-sized displays with built-in stereoscopic 3D Kinect cameras that you can control with your voice and gestures (in addition to touch controls via touchscreens).
The Verge asked Microsoft’s Craig Mundie how the company plans to make use of its Kinect technology, and Mundie admitted that the goal is to shrink the sensor and reduce its cost so it can be integrated into more products beyond the video game console. Displays and laptops embedded with a thinner and smaller version of the Kinect 3D cameras are definitely in the works. In fact, “[Mundie’s] dream is to get a Kinect into the bezel of something like this [Surface tablet].” Considering that Microsoft researchers and engineers are already able to show working models of Kinect-embedded devices, we may not have to wait long for these machines to reach store shelves.
Of course, anyone who has used the Kinect on an Xbox knows that the technology is not perfect. For example, the current generation of the Kinect sensor requires quite a bit of physical space before it can accurately track your motions, and that's not a luxury a tablet or laptop user has on the go. Microsoft definitely has its work cut out for it in integrating Kinect technology into every device it can, but at least it is working on the problem. We can't wait to see a Kinect laptop or smartphone hit the market.