Galen Gruman
Executive Editor for Global Content

3D gestures: Coming to a screen near you

analysis | Apr 5, 2013 | 4 mins

Videos show how touchscreens may soon morph, given the rapid developments in spatial gesture technology

Two’s a coincidence, three’s a trend. There’s truth to that old reporter’s expression. In this case, there are three examples of a new kind of gesture technology coming to a mobile device or computer near you. Apple’s iPhone introduced the world to gesture-based computing, changing how we interact with first smartphones, then all sorts of screens, including point-of-sale terminals and ATMs. (Yes, the technology existed previously, but the iPhone made it common.) When you see a screen in front of you, your instinct is now to touch it, right?

Touchscreens are in for a change; they’re about to leave their 2D Flatland behind and support more complex gestures in three dimensions. The motion-detection capability of Microsoft’s Kinect sensor for the Xbox gaming console kicked off the spatial gestures trend two years ago, but it’s not suited to the fine movements that computer gestures require.


You’ve probably heard of Samsung’s infrared-based touchless gesture support in the forthcoming Samsung Galaxy S 4. It uses IR beams to sense where your hands are when hovering over, but not touching, the screen, registering gross movements like waving your hand. IR is a low-resolution technology, which is why the Galaxy S 4’s touchless gestures are fairly primitive. You can see the technology in action here (go to the 30-second mark):

Both Sony and Huawei have announced camera-based 3D gesture detection. Sony ships a limited version in its Xperia Sola smartphone, but Huawei hasn’t yet brought the technology to market on its smartphones, citing the need for more powerful graphics processors and two front-facing cameras.

Leap Motion’s $80 Leap controller — due to ship this spring — is more sophisticated. The small box plugs into a Windows PC’s or Mac’s USB port, then uses two cameras and three infrared LEDs in concert to detect movement of your fingers or other objects in the “airspace” above it. The technology could be miniaturized for use in smartphones and tablets, though that’s not likely to happen any time soon given the hardware burden it would impose. But it’s no stretch to see the technology incorporated into desktop and laptop monitors or perhaps even into keyboards.

The software may need some refinement, though. One early tester I know said it works great until you actually want to do something: There’s (so far) no gesture equivalent of pressing Enter or clicking the mouse button to tell the computer “do it.” You can see the technology at work here:

All these approaches require extra hardware and consume power. Researchers at the Korea Advanced Institute of Science and Technology think they have a technology that requires neither, making it well suited for smartphones and tablets. The MagGetz project uses the magnetometer built into most smartphones (the sensor behind the compass feature, for example) to detect the fields of other magnets. Software uses those magnetic perturbations to calculate the other magnets’ locations. Inventor Sung Jae Hwang tells me this means gestures can occur anywhere around the device, not just in a confined area above a screen, as in the case of the Galaxy S 4, or above a detector, as in the case of the Leap.
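To see why a magnetometer alone can do this work, consider a toy sketch of the perturbation idea: subtract the calibrated baseline field (Earth’s magnetism) from the raw reading, and what remains is the field contributed by the nearby magnet, whose strength falls off with distance. This is purely illustrative — the baseline values, the calibration constant, and every function name here are my own assumptions, not MagGetz code.

```python
import math

# Assumed baseline: the phone's ambient field (microtesla) measured with
# no magnet nearby. A real app would calibrate this at startup.
BASELINE = (22.0, 5.0, -43.0)

def perturbation(reading, baseline=BASELINE):
    """Per-axis field contributed by an external magnet:
    raw magnetometer reading minus the calibrated baseline."""
    return tuple(r - b for r, b in zip(reading, baseline))

def magnitude(vec):
    """Euclidean length of a 3-axis field vector."""
    return math.sqrt(sum(v * v for v in vec))

def estimate_distance(reading, moment=0.2):
    """Very rough range estimate from dipole falloff, |B| ~ m / r^3,
    where `moment` is an assumed per-magnet calibration constant."""
    b = magnitude(perturbation(reading))
    if b <= 0:
        return float("inf")
    return (moment / b) ** (1.0 / 3.0)

# A stronger perturbation implies the magnet is closer to the phone.
near_reading = (60.0, 20.0, -30.0)   # magnet held close (hypothetical)
far_reading = (24.0, 6.0, -42.0)     # magnet far away (hypothetical)
assert estimate_distance(near_reading) < estimate_distance(far_reading)
```

A real implementation would track the perturbation vector over time across all three axes to recover direction as well as range, which is what lets gestures happen anywhere around the device rather than only above it.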

But the MagGetz approach requires some sort of magnetic pointer to interact with the magnetometer, so you can’t use bare hand or finger gestures with it. That suggests the technology will be a better fit for interaction with pen-supporting devices (like Samsung’s Note tablet and smartphone) or for glove-based interactions (as in medical uses). You can see the technology in use here:

Although spatial gesture technology’s roots are decades old, often derived from robotic vision systems, we’re now watching it cross into the consumer electronics world as the underlying components have been miniaturized, processing capabilities have accelerated, and user familiarity has increased interest in developing it further. That’s the same trajectory we saw for voice recognition, which also was promised for decades but only recently (think Siri) has become workable in common computing environments.

As with voice recognition, spatial gesture technology needs further refinement and will benefit from continued leaps in computing capability. But it’s coming.

This article, “3D gestures: Coming to a screen near you,” was originally published at InfoWorld.com. Read more of Galen Gruman’s Smart User blog.