Now that we can navigate a computer screen through gestures, could a three-dimensional interface that recognizes gestures be far behind? According to Jinha Lee, who created a 3D desktop when he interned at Microsoft’s Applied Sciences Group, the future could be closer than you think.

Lee, a Massachusetts Institute of Technology grad student, showed off his SpaceTop 3D desktop interface at this week’s TED conference in Long Beach, California, as first reported by Wired.

Powered by a transparent LED display and a system of two cameras – one tracking the user’s gestures and the other watching their eyes to automatically adjust the projection – the SpaceTop 3D interface lets you use your hands to interact with 3D graphics, like documents and web pages, as if they were physical objects.
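To give a rough sense of how that eye-tracking projection adjustment might work, here is a minimal sketch of head-coupled perspective rendering. This is our own illustration, not Lee's code: the screen dimensions, units, and the tracked eye position are all made-up example values, and a real system would read them from the camera every frame.

```python
# Hypothetical sketch of head-coupled perspective (not SpaceTop's actual code):
# given the viewer's eye position reported by a face-tracking camera, build an
# off-axis ("asymmetric frustum") projection so rendered 3D objects appear to
# sit at a fixed position behind the physical display, even as the viewer moves.

import numpy as np

def off_axis_projection(eye, screen_w, screen_h, near=0.05, far=10.0):
    """Projection matrix for an eye at (x, y, z) metres, measured from the
    centre of a screen_w x screen_h screen lying in the z = 0 plane."""
    ex, ey, ez = eye
    # Frustum extents on the near plane, shifted by the eye offset.
    left   = (-screen_w / 2 - ex) * near / ez
    right  = ( screen_w / 2 - ex) * near / ez
    bottom = (-screen_h / 2 - ey) * near / ez
    top    = ( screen_h / 2 - ey) * near / ez
    # Standard OpenGL-style asymmetric frustum matrix.
    return np.array([
        [2 * near / (right - left), 0, (right + left) / (right - left), 0],
        [0, 2 * near / (top - bottom), (top + bottom) / (top - bottom), 0],
        [0, 0, -(far + near) / (far - near), -2 * far * near / (far - near)],
        [0, 0, -1, 0],
    ])

# Each frame: read the tracked eye position and rebuild the projection,
# so the scene lines up with where the viewer is actually looking from.
eye_position = (0.03, -0.02, 0.45)   # hypothetical tracker output, in metres
P = off_axis_projection(eye_position, screen_w=0.34, screen_h=0.19)
print(P.round(3))
```

The design choice that matters here is that the projection is recomputed continuously from the viewer's head position, which is what makes objects rendered on a flat transparent panel read as occupying the space behind it.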

Unlike using the Xbox Kinect’s motion-sensing camera to navigate your video game console from your couch, the SpaceTop interface is designed for you to literally reach your hands under the computer screen to manipulate the 3D projections.

As you can see in the demo, the ability to manipulate digital objects in 3D makes a lot of sense for fields like engineering and architecture, where you can rotate a model with your hands rather than with a mouse. Although the interaction is still mediated by a computer and requires learning a set of specific gestures to control SpaceTop, it seems like a more intuitive way to use a computer.
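For a rough idea of what rotating a model with your hands could look like in code, the sketch below (our own illustration, not SpaceTop's implementation) maps the frame-to-frame displacement of a tracked hand onto yaw and pitch of an object, replacing the click-and-drag a mouse-based tool would need. The tracker readings and the sensitivity constant are invented for the example.

```python
# Hypothetical mapping from tracked hand motion to object rotation
# (an illustration of the idea, not SpaceTop's code).

import numpy as np

SENSITIVITY = 4.0  # radians of rotation per metre of hand travel (assumed)

def rotation_from_hand_delta(prev, curr):
    """Build a 3x3 rotation matrix from the change in hand position (metres):
    horizontal movement becomes yaw, vertical movement becomes pitch."""
    dx, dy = curr[0] - prev[0], curr[1] - prev[1]
    yaw, pitch = dx * SENSITIVITY, -dy * SENSITIVITY
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    rot_y = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])   # yaw about Y
    rot_x = np.array([[1, 0, 0], [0, cp, -sp], [0, sp, cp]])   # pitch about X
    return rot_y @ rot_x

# Example: the hand moved 2 cm to the right while "holding" a model.
R = rotation_from_hand_delta(prev=(0.10, 0.05), curr=(0.12, 0.05))
print(R.round(3))
```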

There are no plans to bring SpaceTop to market just yet, though we hope a company out there will see the benefit of introducing a more efficient and natural way of using a computer. “It shouldn’t be in the hands of scientists, it should be in the hands of normal people,” Lee said. After all, the technology to bring this type of 3D environment to life clearly exists, so we hope our next computer will have a 3D home screen rather than live tiles.

 
