Gesture-sensing technology rivals keyboards and mice
Interfaces change, processors come and go, but the keyboard and its trusty sidekick the mouse have been part of the PC for at least 30 years. They may now be about to face stiff competition, thanks to two gesture-sensing technologies set to drastically reduce the amount of typing and clicking needed to control the average computer.
By tracking hand movements precisely, the wrist-mounted prototype of the Digits project, built by a team from Microsoft Research in Cambridge, UK, allows gestures to be communicated in real time to any connected device.
An array of LEDs mounted on a plastic wrist brace facing the palm bounces infrared light off the user’s fingers. A laser shines across the hand to highlight the orientation of the fingers. A camera then reads the reflections, and software builds a model of the moving hand that is accurate to within one hundredth of a centimetre.
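The core trick is that reflected infrared gets weaker the further a finger is from the sensor, so a calibrated intensity reading can be inverted back into a distance. The sketch below is a loose illustration of that principle only, not Microsoft’s actual Digits code, which has not been published in this article; the function name and calibration values are hypothetical.

```python
import math

def estimate_distance(reflected_intensity, calib_intensity, calib_distance):
    """Reflected IR intensity falls off roughly with the square of distance,
    so one calibrated reading lets us invert a new reading into a range.
    Illustrative only -- the real Digits pipeline also fuses the laser line
    and camera geometry to recover full finger orientation."""
    return calib_distance * math.sqrt(calib_intensity / reflected_intensity)

# Calibration: a finger at 5 cm reflects intensity 1.0.
# A later reading of 0.25 then implies the finger is about 10 cm away.
print(estimate_distance(0.25, 1.0, 5.0))  # 10.0
```

In practice a single intensity gives only range, not position; combining many LED–camera readings is what lets the software fit a full hand model.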
Project leader David Kim says that Digits was born of the desire for a technology more accurate than the company’s Xbox Kinect gaming sensor. The aim was to track movement without tying the user to any particular device. “We had to use technologies that are small and use less power,” he says. “It shouldn’t interfere with daily activity, and we wanted to enable continuous interaction.”
All in the wrist
The device is about the size of two ping pong balls taped together, and currently needs to be tethered to a laptop computer. But Kim plans to shrink it to the size of a wristwatch and make it wireless. In a demonstration at a symposium on user interface software and technology in Cambridge, Massachusetts, the system was shown controlling video games, smartphones and computers.
The Digits system isn’t the first such device. The Leap Motion sensor, from a San Francisco-based company of the same name, sits on a desktop and reads a number of different gestures as users wave their hands above it. The company has not yet released details on how the sensor works.
Digits is a “really nice piece of work”, says Thad Starner at the Georgia Institute of Technology in Atlanta, who is also technical lead on Google’s Project Glass.
Digits is in its early stages, says Starner, who has been using a wearable computer for almost 20 years. Nonetheless, he is excited at the potential for pairing sensitive, precise control interfaces with heads-up displays like Google Glass – which looks like a pair of glasses without lenses, and allows users to see data without needing to turn their head – or his own bespoke rig.
Symbiosis of man and machine
“You can imagine using really subtle gestures,” he says. “I’d use it in class to pull up notes while I’m teaching.” Starner’s own device feeds information to a display in front of his left eye. During a phone interview with New Scientist, speech-recognition software listening in on the conversation pulled up emails he had exchanged with the magazine in the past. Later, the system pushed a student’s thesis on rapid interactions with electronic devices into his field of view, deeming it relevant to the discussion.
Starner says the real power of Digits will be in continuous recognition – the ability not only to identify standalone commands, such as pressing your thumb and index finger together to skip a track on your iPod, but also to interpret hand movements in sequence. Some of his current work involves teaching American Sign Language to children with hearing difficulties using a video game. “If we had finger-tracking wristwatches they could put on and play the game, we could look at how their fingers move through time, and give them feedback,” he says. “That would be really beneficial.”
Adding wearable computing to the arsenal of human-computer interfaces represents “a symbiosis of man and machine that we haven’t seen before”, Starner says. “Having access to data on a split-second basis makes you more powerful, more in control of your life. This is going to get us to the stage where we use systems without thinking.”
Syndicated Content: Hal Hodson, New Scientist