Could robots evolve autonomously? The HyperNEAT project at Cornell University has put neural-net-based brains inside robotic bodies and programmed those brains to take sensory inputs from the body and use them to work out how to control it. The brains were placed in a variety of different bodies: some taught themselves to control the body, others failed.
The best-performing brains were replicated and put into the next generation of robotic bodies.
Eventually the Cornell team produced a robotic body controlled by a brain that could make it walk around the lab.
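The evolutionary loop described above (evaluate each brain in a body, keep the best performers, replicate them into the next generation) can be sketched in miniature. This is a simplified, hypothetical illustration, not the HyperNEAT implementation: the real project evolves neural networks and scores them in a physics simulation, whereas here a "brain" is just a list of numbers, `fitness` is a stand-in scoring function, and all names are invented for the example.

```python
import random

def fitness(brain):
    # Hypothetical stand-in: in the real project, performance is how far
    # the brain's gait carries the body in simulation. Here we just sum
    # the brain's numbers so the example is self-contained.
    return sum(brain)

def evolve(population, generations, mutation_rate=0.1):
    """Replicate the best-performing brains into each new generation."""
    for _ in range(generations):
        # Evaluate every brain in its body and rank by performance.
        ranked = sorted(population, key=fitness, reverse=True)
        # The top half survive; the rest failed to control the body.
        survivors = ranked[: len(ranked) // 2]
        # Refill the population with slightly mutated copies of survivors.
        children = [
            [w + random.gauss(0, mutation_rate) for w in parent]
            for parent in survivors
        ]
        population = survivors + children
    # Return the best brain found across the run.
    return max(population, key=fitness)

random.seed(0)
population = [[random.uniform(-1, 1) for _ in range(4)] for _ in range(10)]
best = evolve(population, generations=20)
```

Because the best brains are carried over unchanged each generation, the top performance can never get worse, which is why repeated rounds of selection and replication steadily improve the gait.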
“It looks like a robot that ‘wakes up’, tries out a new gait, and then ‘thinks about it’ for a few seconds, before waking up again and trying a new gait,” says project leader Jeffrey Clune. “Over time you see that the robot learns how to walk better and better.”
A brain transferred to a different type of body – say, from two legs to four – can adapt, learning new techniques for control.
Cornell has 3D printed many of the essential components of the robotic bodies, such as muscles, bones, batteries, wires and computers.
“Eventually, the entire thing will be printed, brains and all,” says Clune. “The end game is to evolve robots in simulation, hit print, and watch them walk out of a 3D printer.”