How to build a robotic guide dog
It’s always great to hear about readers’ projects, and the details of Gadget Master work in progress, so thanks to Tom for recently sharing his work with us. It involves a school electronics project to build a prototype robotic guide dog, no less!
Check out details of Tom’s plan, and his outline specs for the work.
For processing, he writes on the project website, the system uses four BeagleBoards networked with a netbook. The idea is to have one computer control each camera and process its part of the image before sending it over the network for stereoscopy. He says a 150A surge motor controller is driven by the netbook, which locates the user and processes maps to determine safe areas.
The idea is that, with the 3D processing, ‘the robot can process at around 12 fps; and without it, the user is located at the speed of the cameras (30 fps)’. All code is written in C++ using the OpenCV libraries.
To put it all in context, Tom wrote to us:
I’ve just finished my A levels and my final project for electronics was a robotic guide dog. It works and fully emulates the features of a guide dog, which I researched in depth. The information and video about it are available here: http://602e21.com/projects/guide/index.php
I’m emailing you because I think it fulfils a real need as a product in its own right (only a prototype at the moment) but also because of the method which I have used, as it is an original method which rivals hugely expensive LiDAR systems for millimetre accuracy at the cost of 4 cheap webcams.
The method is fast, has no false negatives and can be used to navigate all kinds of terrain by any kind of robot without error. I feel this is an important development for all areas of robotics. Details of the method are available on the same page.
Tom also hopes that the system could be useful for other applications requiring obstacle avoidance and local navigation.