Camera-based object recognition aids navigation for the blind

Camera-based object recognition could help blind users navigate their environment, according to researchers at MIT, who have created an automatic body-worn system.


Images are gathered by a chest-worn 3D camera, while the user gets navigation information via a belt with five vibrational motors spaced across the front, and object information from an electro-mechanical Braille interface.

“We did a couple of different tests with blind users,” said MIT mechanical engineer Robert Katzschmann. “Having something that didn’t infringe on their other senses was important. So we didn’t want to have audio; we didn’t want to have something around the head, vibrations on the neck – all of those things, we tried them out, but none of them were accepted. We found that the one area of the body that is the least used for other senses is around your abdomen.”

Key to the system are proprietary algorithms that quickly identify surfaces and their orientations from the 3D camera, which delivers 640 x 480 images with both colour and depth measurements for each pixel.

The algorithm first groups the pixels into clusters of three.

Because the pixels have associated location data, said MIT, each cluster determines a plane. If the orientations of the planes defined by five nearby clusters are within 10deg of each other, the system concludes that it has found a surface.
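The cluster-to-plane check described above can be sketched as follows. This is a hypothetical illustration, not MIT's proprietary algorithm: each three-pixel cluster (with 3D positions from the depth camera) defines a plane, and a surface is declared when the plane normals of nearby clusters agree to within 10deg. The function names and the use of NumPy are assumptions for the sketch.

```python
import numpy as np

def cluster_normal(p0, p1, p2):
    """Unit normal of the plane through one three-pixel cluster's 3D points."""
    n = np.cross(p1 - p0, p2 - p0)
    return n / np.linalg.norm(n)

def is_surface(normals, tol_deg=10.0):
    """True if all cluster-plane orientations agree to within tol_deg."""
    for i in range(len(normals)):
        for j in range(i + 1, len(normals)):
            # abs() treats a flipped normal as the same orientation
            cos_angle = np.clip(abs(np.dot(normals[i], normals[j])), 0.0, 1.0)
            if np.degrees(np.arccos(cos_angle)) > tol_deg:
                return False
    return True
```

With five nearby clusters all lying on the floor, the normals coincide and `is_surface` returns `True`; tilt one cluster's plane by more than 10deg and the check fails.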

It doesn’t need to determine the extent of the surface, or what type of object the surface belongs to; it simply registers an obstacle at that location and begins to buzz the associated motor if the wearer gets within 2m of it.
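A minimal sketch of that obstacle-to-motor mapping might look like the following. The 2m alert range comes from the article; the camera field of view and the bearing-to-motor mapping are assumptions made for illustration.

```python
ALERT_RANGE_M = 2.0        # alert distance stated in the article
NUM_MOTORS = 5             # five vibrational motors across the belt
FIELD_OF_VIEW_DEG = 90.0   # assumed camera field of view, not from the article

def motor_for_bearing(bearing_deg):
    """Map an obstacle bearing (degrees, 0 = straight ahead) to a motor index 0-4."""
    half = FIELD_OF_VIEW_DEG / 2
    bearing = max(-half, min(half, bearing_deg))
    return int((bearing + half) / FIELD_OF_VIEW_DEG * (NUM_MOTORS - 1) + 0.5)

def should_buzz(distance_m):
    """Buzz the motor once the registered obstacle is within alert range."""
    return distance_m <= ALERT_RANGE_M
```

An obstacle dead ahead maps to the centre motor, while obstacles at the edges of the field of view map to the outermost motors.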

Chairs and tables

Chair identification is similar to surface identification, but more stringent: the system needs to complete three distinct surface identifications, in the same general area, rather than just one, to determine if the chair is unoccupied. The surfaces need to be roughly parallel to the ground, and they have to fall within a prescribed range of heights.
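The chair test above can be sketched as a filter over detected surfaces. The specific thresholds here are assumptions; the article says only that the surfaces must be roughly parallel to the ground, fall within a prescribed height range, and number three in the same general area.

```python
import math

MAX_TILT_DEG = 10.0          # assumed tolerance for "roughly parallel to the ground"
HEIGHT_RANGE_M = (0.25, 1.1)  # assumed "prescribed range of heights"

def is_horizontal(normal, max_tilt_deg=MAX_TILT_DEG):
    """A surface is roughly ground-parallel if its normal is near vertical."""
    nx, ny, nz = normal
    tilt = math.degrees(math.acos(abs(nz) / math.sqrt(nx*nx + ny*ny + nz*nz)))
    return tilt <= max_tilt_deg

def looks_like_chair(surfaces):
    """surfaces: list of (normal, height_m) detections in the same general area.
    Requires three distinct qualifying surfaces, per the article's description."""
    lo, hi = HEIGHT_RANGE_M
    hits = [(n, h) for n, h in surfaces if is_horizontal(n) and lo <= h <= hi]
    return len(hits) >= 3
```

Two qualifying surfaces are not enough; a third horizontal surface in the height band is what distinguishes a chair from, say, a lone table top.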

The belt motors vary in frequency, intensity, and duration of vibration, as well as inter-vibration interval.

“For instance, an increase in frequency and intensity generally indicates that the wearer is approaching an obstacle in the direction indicated by that particular motor,” said MIT. “But when the system is in chair-finding mode, for example, a double pulse indicates the direction in which a chair with a vacant seat can be found.”
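One plausible encoding of those two signalling modes is sketched below. The rising frequency and intensity with proximity, and the double pulse in chair-finding mode, come from the article; the numeric frequency range and pulse timings are invented for illustration.

```python
def obstacle_signal(distance_m, alert_range_m=2.0):
    """Return (frequency_hz, intensity) rising as the obstacle gets closer."""
    closeness = max(0.0, min(1.0, 1.0 - distance_m / alert_range_m))
    frequency_hz = 50 + 200 * closeness  # assumed range, not from the article
    intensity = closeness                # 0.0 at the edge of range, 1.0 at contact
    return frequency_hz, intensity

def chair_signal():
    """Chair-finding mode: a double pulse, as (duration_s, motor_on) steps."""
    return [(0.1, True), (0.1, False), (0.1, True)]
```

At the 2m boundary the motor is silent; at arm's length it vibrates hard and fast, so the wearer can judge range without any audio cue.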

The Braille interface consists of two rows of five Braille pads.

Symbols displayed on the pads describe the objects in the local environment: ‘t’ indicates table and ‘c’ indicates chair, for example. The symbol’s position in the row indicates the direction in which it can be found, and the column indicates its distance.
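A sketch of that 2 x 5 layout follows. It assumes one reading of the article's description: the five positions across a row encode direction, and the two rows encode a near/far distance band. The symbol letters come from the article; the 2m near/far threshold and the empty-cell marker are assumptions.

```python
NUM_DIRECTIONS = 5  # five Braille pads per row

def braille_grid(objects, near_threshold_m=2.0):
    """objects: list of (symbol, direction_idx 0-4, distance_m).
    Returns two rows of five cells; '.' marks an empty pad."""
    rows = [['.'] * NUM_DIRECTIONS for _ in range(2)]
    for symbol, direction, distance in objects:
        row = 0 if distance <= near_threshold_m else 1  # near objects on top row
        rows[row][direction] = symbol
    return rows
```

A chair just ahead-centre and a table further off to the right would light different pads, matching the direction and range the belt motors are simultaneously signalling.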

“A user adept at Braille should find that the signals from the Braille interface and the belt-mounted motors coincide,” said MIT. “In tests, the chair-finding system reduced subjects’ contacts with objects other than the chairs they sought by 80%, and the navigation system reduced the number of cane collisions with people loitering around a hallway by 86%.”

The system is intended to be used with or without a white cane, and will be described this week at the International Conference on Robotics and Automation.


The work was carried out by MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), together with the University of Modena and Reggio Emilia in Italy.

