The best way to produce competent robots, ones that can do what we want, say two UCLA researchers, is to breed them on a robot farm.
Robots today are a long way from the future envisioned for them decades ago by fiction writers and engineers alike. Robot makers still spend much of their time trying to get their creations to walk across a room without crashing into furniture, or to tell a human being from a rock--and not, for instance, worrying about the possibility that robots may one day rule the planet. Why has progress been so excruciatingly slow? One pair of researchers thinks it knows: roboticists, they say, have been trying to write enormously complex robot software in the same instruction-by-instruction fashion that programmers employ to write, say, spreadsheet software. And these researchers propose a solution: robot farms, on which robots mate and have children, allowing evolution to hone their skills in somewhat the same way natural selection honed ours.
Robot husbandry is the brainchild of David Jefferson and Chuck Taylor of UCLA. Neither one of them had worked with robots until the year before last. Jefferson is a computer scientist who develops simulations of evolution, in which thousands of generations of video-game-like organisms are born, fight, cooperate, mate, and die, all in the space of a few days. Taylor is a biologist who turned to computer simulations to predict how populations of real organisms, such as mosquitoes, evolve and grow. Robots are a way for both of them to test some of the concepts they’ve learned from their simulations. So far they’ve developed only a single robot, but they’re well into the planning stage for the farm, and they’re going after funding for the first herd of 20. “Actually, I’d like to have tens of thousands, or hundreds of thousands of robots,” says Jefferson. “But nobody could afford that.”
How will robots mate on Jefferson and Taylor’s dream farm? Whatever you’re picturing, forget it. Sex between two robots will be strictly a software affair: chunks of each of their programs will be randomly mixed and matched, creating new programs that are then used to control other robots (built by humans, not hatched) that serve as offspring. It will be a little passionless but still a fair imitation of real mating, in which genes from each of the parents are combined to come up with offspring that are similar--but not identical--to the parents.
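The random mixing and matching of program chunks described above is what computer scientists call crossover. A minimal sketch in Python, with hypothetical instruction names standing in for real robot programs (the function and variable names here are illustrative, not Jefferson and Taylor's actual code):

```python
import random

def crossover(parent_a, parent_b):
    """Mix chunks of two parent programs at a random cut point,
    producing two offspring that resemble, but do not duplicate,
    their parents."""
    cut = random.randint(1, min(len(parent_a), len(parent_b)) - 1)
    child_1 = parent_a[:cut] + parent_b[cut:]
    child_2 = parent_b[:cut] + parent_a[cut:]
    return child_1, child_2

# Hypothetical instruction lists for two parent robots:
mom = ["walk_toward_ball", "open_pincers", "close_pincers"]
dad = ["stand_still", "back_away", "open_pincers"]
kid_1, kid_2 = crossover(mom, dad)
```

Each child begins with a chunk of one parent's program and ends with a chunk of the other's, so offspring are similar, but not identical, to their parents.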
Like human children, the offspring of Jefferson and Taylor’s robots may on occasion turn out better than their parents. In this case better means being more successful at gathering Ping-Pong balls and other tasks designed to evaluate the robots’ dexterity. To kick off evolution, the researchers will equip each of a herd of robots with various random combinations of instructions. As an oversimplified example, imagine the possible instructions that might be useful, in the right combination, for picking up a Ping-Pong ball in a robot’s mechanical pincers: walk toward the ball, back away from the ball, stand still, open pincers, and close pincers.
Initially, each robot in the group will be provided with a different random combination of these instructions--such as first close pincers, then walk toward the ball, then back away from the ball--most of which wouldn’t help the robot come close to achieving its goal. The poorly performing robots will all be killed, meaning their programs will be wiped out. But a few lucky robots, presumably, will end up with a combination of instructions that allows them at least to get within striking distance of the ball.
Two of these better robots will then be designated parents. They will mate: that is, one or two instructions from the program of one of these robots will be exchanged with one or two instructions from the other parent, and that new combination will be inserted into the microprocessor brain of one of the many less-fortunate robots that were killed off--a sort of robot reincarnation. The other killed-off robots will be reincarnated with different combinations of the parents’ programs, thus creating an entire new generation of robot programs. Since the parents’ instructions are recombined in a random way, many of these robot children will probably end up doing worse than their parents. But a few will probably do better, leading to a new set of parents for the third generation, which will then produce a fourth generation, and so on.
In practice, Jefferson and Taylor plan to run through dozens or hundreds of generations in a matter of days or hours, watching each generation just long enough to determine the best candidates for parents. With each successive generation, the robots should get more and more adept at their assigned task, as selective breeding does its work. “Eventually we hope to get to a state where we can consider those robots to be a colony of pseudo-organisms,” says Jefferson.
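The whole cycle--random starting programs, culling the worst performers, and reincarnating them with recombined copies of the parents' programs--can be sketched in a few lines of Python. Everything here is an assumption for illustration: the instruction names come from the Ping-Pong-ball example above, and the fitness function is a crude stand-in for an actual trial on the farm.

```python
import random

INSTRUCTIONS = ["walk_toward_ball", "back_away", "stand_still",
                "open_pincers", "close_pincers"]

def random_program(length=5):
    # Each robot starts with a random combination of instructions.
    return [random.choice(INSTRUCTIONS) for _ in range(length)]

def fitness(program):
    # Stand-in for a real trial: count instructions that plausibly
    # help with the task. A real farm would score balls gathered.
    return program.count("walk_toward_ball") + program.count("close_pincers")

def crossover(a, b):
    # Swap chunks of the parents' programs at a random cut point.
    cut = random.randint(1, len(a) - 1)
    return a[:cut] + b[cut:]

def next_generation(herd):
    # Keep the two best performers as parents; "reincarnate" the
    # rest with recombined copies of the parents' programs.
    parents = sorted(herd, key=fitness, reverse=True)[:2]
    children = [crossover(*random.sample(parents, 2))
                for _ in range(len(herd) - 2)]
    return parents + children

herd = [random_program() for _ in range(20)]
for generation in range(100):
    herd = next_generation(herd)
best = max(herd, key=fitness)
```

Because the best program is always carried into the next generation, the herd's top fitness can only hold steady or improve, which is what lets selective breeding ratchet upward over dozens of generations.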
All this may seem like a roundabout way to develop robot software, but Jefferson and Taylor believe it may be the only way to develop software for real, humanlike robots--as opposed to the industrial machines that are already performing narrowly circumscribed tasks in factories today.
“Robot software will be by far the most difficult and complex software anybody has ever built,” says Jefferson. “If you look at all the elements that can make software hard to develop, this has them all”: robots operate in real time, their motions are complicated, their electromechanical systems differ from robot to robot, and you can’t predict the different conditions each will face. Not only that, but bad robot software can result in harm to both the robot and its environment.
The closest existing thing in complexity to what robot software will have to be, notes Jefferson, is the software that controls spacecraft, and that has been created by tens of thousands of programmers writing and debugging programs for decades. But spacecraft usually operate in groups of one. Jefferson and Taylor want their robots to learn to work cooperatively, marching together, lifting heavy objects, even constructing things.
What makes them think evolution-style randomness can handle this daunting chore any better than old-fashioned programming? It’s simply that nature did it that way. “If you look at all the examples we have of complex cooperative behavior in the world,” says Jefferson, “you’ll note that none of them have been engineered; they’re all in the animal kingdom, which means they’ve all evolved.”
Taylor has already started work on one simple robot. Built out of Legos, with a microchip brain, this toylike device is supposed to evolve the behavioral sophistication of a bacterium--one that tends to propel itself through a liquid in the direction of the greatest concentration of sugar. Instead of sugar, the robot’s photoelectric eye detects printed stripes underneath it, and the goal is to get the robot to move in the direction of the most closely packed stripes. At the moment Taylor is still working out mechanical bugs, so the robot itself hasn’t done any evolving yet. In the meantime, though, Taylor has allowed its software to evolve in the virtual world of a computer simulation. The initial program was a random collection of simple instructions that had the simulated robot moving without rhyme or reason. After each run, the program gets slightly scrambled; if the new program is any better at the task, it replaces the original one. After several hundred generations, the simulated robot’s behavior is still sub-bacterial, but it is beginning to show signs of following patterns.
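The simulated robot's evolution follows a simpler recipe than the farm's mating scheme: one program, scrambled slightly after each run, with the scrambled version kept only if it scores better. A minimal sketch, in which the move names and the scoring function are invented stand-ins (the real fitness would come from running the simulated robot over the stripes):

```python
import random

MOVES = ["forward", "left", "right", "wait"]

def scramble(program, rate=0.2):
    # "Slightly scramble" the program: each instruction has a small
    # chance of being replaced with a random one.
    return [random.choice(MOVES) if random.random() < rate else step
            for step in program]

def score(program):
    # Stand-in for the stripe-following trial: reward moves that
    # carry the robot toward denser stripes.
    return program.count("forward")

# Start from a random collection of simple instructions.
program = [random.choice(MOVES) for _ in range(20)]
for run in range(500):
    candidate = scramble(program)
    if score(candidate) > score(program):  # keep only improvements
        program = candidate
```

Because worse variants are always discarded, the program's score never goes backward; several hundred such runs steadily nudge random flailing toward patterned behavior, which matches what Taylor observes.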
That may not seem impressive--indeed, it would take only a few hours to handwrite a program that followed the stripes perfectly--but there is good reason to believe that Taylor’s approach will eventually lead to a robot bacterium that outperforms anything that could be custom-programmed. Evolution is a bootstrapping process: it’s difficult to start, but once it’s rolling it works great. Jefferson has noticed as much in computer simulations he’s done of ant populations, in which colonies that at first wander around randomly searching for bits of food eventually evolve the ability to organize their searches by laying scent trails--just as real ants do.
“The hardest part is at the beginning of evolution,” he explains. “When you start with totally random behavior, only a small fraction of ant colonies ever accidentally start to gather any food at all. But once you get one generation that starts, they’re likely to go on to evolve an effective strategy.” Starting evolution is even harder with a population of one, like a single robot bacterium; desirable traits are much more likely to emerge in reasonable time if there is a large population within which to mix and match characteristics.
In fact, Jefferson and Taylor are concerned that the 100 robots they’ve planned for their farm won’t climb the evolutionary tree quickly enough to make things interesting. To get things rolling, Jefferson may pad the robot population with computer simulations of several thousand additional robots. The simulations will speed the process, but the real robots will keep things honest. “A simulation can’t accurately reflect the situation of real robots in a real, ‘dirty’ environment with dust, and cracks in the floor, and mechanical failures,” says Jefferson.
The robot farm is a long way from reality. The most immediate obstacle is funding. Robots are expensive--about $7,000 each for the MIT-designed toy-tractor-like machines that Jefferson and Taylor have their eye on. The two are hopeful that the funding will come through, but even if they get the robots this year, Jefferson will face the task of writing a programming language, control programs, and other basic software before the robots can even start evolving.
But what he and Taylor see themselves doing with their first farm is laying the groundwork for a great leap forward in robotics. It is easy to imagine the potential applications of smart robots--they might explore Mars, probe toxic waste dumps, or serve as self-driving vehicles on automated roadways, to name a few examples--but it is much harder to see how the software to control such wunderbeasts will ever be written line by line. “It will be faster and cheaper to generate the software by evolution,” insists Jefferson.
On the other hand, he acknowledges, the beauty of the evolutionary approach--that it emerges on its own, without anyone’s having to analyze all the complexities--is also a potential stumbling block to its acceptance, especially in applications where human life is at stake. “The software may function perfectly well, but we won’t really understand how it works,” says Jefferson. Some people might find robots like that a little too human for comfort.