Loyalty, teamwork, cruel deception: welcome to robot evolution.

Living things communicate all the time. They bark, they glow, they make a stink, they thwack the ground. How their communication evolved is the sort of big question that keeps lots of biologists busy for entire careers. One of the reasons it's so big is that there are many different things that organisms communicate. A frog may sing to attract mates. A plant may give off a chemical to attract parasitoid wasps to attack the bugs chewing its leaves. An ant may lay down pheromone trails to guide other ants to food. Bacteria emit chemical signals to each other so that they can build biofilms that line our lungs and guts.

Communication may work very well in these cases, but scientists also want to know how it evolved in the first place. Roughly speaking, their question goes something like this. Say you're an organism living a solitary life. Sending a signal to another member of your species may cost you more than it might bring back in benefits. If you come across some food and suddenly declare, "My, but those are some tasty grubs," you may find yourself besieged by other members of your species all coming to have some for themselves. You might even attract the attention of a predator and become a meal yourself. So why not just shut up?

There are many ways to attack this question. You can go out and listen to birds. You can genetically engineer bacteria to tinker with their communication system and see what happens. Or you can build an army of robots.

Laurent Keller, an expert on social evolution at the University of Lausanne in Switzerland, chose the latter. Working with robotics experts at Lausanne, he constructed simple robots like the ones shown above. Each robot had a pair of wheeled tracks, a 360-degree light-sensing camera, and an infrared sensor underneath. The robots were controlled by a program with a neural network architecture.
In neural networks, inputs come in through various channels and are combined in various ways; the combinations then produce outgoing signals. In the case of the Swiss robots, the inputs were the signals from the camera and the infrared sensor, and the output was the control of the tracks.

The scientists then put the robots in a little arena with two glowing red disks. One disk they called the food source. The other was the poison source. The only difference between them was that the food source sat on top of a gray piece of paper, and the poison source sat on top of black paper. A robot could tell the difference between the two only once it was close enough to a source to use its infrared sensor to see the paper color.

Then the scientists allowed the robots to evolve. The robots--a thousand of them in each trial of the experiment--started out with neural networks that were wired at random. They were placed in groups of ten in arenas with poison and food, and they all wandered in a haze. If a robot happened to reach the food and detected the gray paper, the scientists awarded it a point. If it ended up by the poison source, it lost a point. The scientists observed each robot over the course of ten minutes and added up its points during that time. (This part of the experiment was run on a computer simulation to save time and to be able to evolve lots of robots at once.)

In the simplest version of the experiment, the scientists selected the top 200 feeders. Not surprisingly, they were all pretty awful, since they had randomly wired neural networks. But they had promise. The scientists "bred" the robots by creating 100 pairs and using parts of each one's program to create a new one. Each new program also had a small chance of spontaneously changing in one part (how strongly it reacted to the red light, for example). After several rounds of this mating, the new programs were plugged back into robots, which then groped around again for food.
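To make the controller concrete, here is a minimal sketch of that kind of network: sensor readings go in, get weighted and combined, and track commands come out. The number of inputs, the single-layer wiring, and the squashing function are my assumptions for illustration, not details from the experiment.

```python
import random

# Hypothetical controller sketch: one layer of weights mapping sensor
# inputs (camera sectors plus the infrared floor sensor) to the two tracks.
N_INPUTS = 10   # assumed sensor channel count
N_OUTPUTS = 2   # left and right track speeds

def random_network():
    """A randomly wired network: one weight per input-output pair, plus a bias."""
    return [[random.uniform(-1, 1) for _ in range(N_INPUTS + 1)]
            for _ in range(N_OUTPUTS)]

def act(network, inputs):
    """Combine the inputs through the weights to produce track commands."""
    outputs = []
    for weights in network:
        # weights[:-1] pair with the inputs; weights[-1] is the bias term.
        total = weights[-1] + sum(w * x for w, x in zip(weights, inputs))
        # Clamp into [-1, 1]: full reverse to full forward on each track.
        outputs.append(max(-1.0, min(1.0, total)))
    return outputs

net = random_network()
left_speed, right_speed = act(net, [0.0] * N_INPUTS)
```

With random weights the robot just wanders, which is exactly the "haze" the first generation showed; evolution's job is to tune those weights.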
And once again the scientists selected the fastest ones. They repeated this cycle 500 times in 20 different replicate lines. When they were done, they plugged the programs into real robots and let them loose in a real arena with real food and poison (well, as real as food and poison get for experimental robots). The real robots behaved just like the simulated ones, demonstrating that the simulation had gotten the physics of the real robots right.

The results were impressive, although perhaps not surprising to people who are familiar with experimental evolution in bacteria. From their randomly wired networks, the robots evolved within a few dozen generations until they were scoring about 160 points a trial. That held in all twenty lines. Each program consisted of 240 bits, which means it could take any of 2 to the 240th power configurations. Out of that unimaginable range of possibilities, the robots in each line found a fast solution.

Now the scientists made things more interesting. There's a great deal of evidence to suggest that if individuals are closely related to one another, evolution may lead to less cut-throat competition and more cooperation. (See my post on slime molds for an example of this research.) So the scientists ran the robot evolution over again, but this time the robots got kin. Rather than mixing them indiscriminately, they grouped the robots into colonies. They bred the best performers only with other members of their colonies, and from their offspring they created robot clones for the next round of food and poison.

Kinship had a big effect on the robots. Now they were scoring about 170 points. Part of their success was the result of politeness. The scientists designed the food source so that only eight out of ten robots could fit around it at once. The individualist robots jostled for access and all ended up with fewer points. The robot families, on the other hand, worked together.
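The select-breed-mutate cycle described above can be sketched roughly like this. The genome length (240 bits), population size (1,000), and number of selected parents (200) follow the text; the mutation rate and the stand-in fitness function are placeholders I've made up, since the real score came from simulated foraging trials.

```python
import random

GENOME_BITS = 240      # each program is 240 bits
POP_SIZE = 1000        # a thousand robots per trial
N_PARENTS = 200        # the top 200 feeders get to breed
MUTATION_RATE = 0.01   # "small chance of spontaneously changing" (assumed value)

def forage_score(genome):
    """Placeholder fitness: the real score came from ten-minute
    food/poison trials, not from the bits directly."""
    return sum(genome)

def breed(parent_a, parent_b):
    """Build a child from parts of each parent's program (one-point
    crossover), then flip the occasional bit as a mutation."""
    cut = random.randrange(GENOME_BITS)
    child = parent_a[:cut] + parent_b[cut:]
    return [bit ^ 1 if random.random() < MUTATION_RATE else bit
            for bit in child]

def next_generation(population):
    """Select the top feeders and breed them back up to full size."""
    ranked = sorted(population, key=forage_score, reverse=True)
    parents = ranked[:N_PARENTS]
    children = []
    while len(children) < POP_SIZE:
        a, b = random.sample(parents, 2)
        children.append(breed(a, b))
    return children

# Start from randomly wired programs, then iterate the cycle.
population = [[random.randint(0, 1) for _ in range(GENOME_BITS)]
              for _ in range(POP_SIZE)]
for _ in range(5):  # the real experiment ran 500 generations
    population = next_generation(population)
```

Even with the toy fitness function, a few rounds of this loop push the population's average score well above its random starting point, which is the same dynamic that carried the robots from aimless wandering to 160 points.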
There was no code of honor in their silicon heads, of course. It's just that they shared the same instructions.

The scientists then added another wrinkle: selection at the level of whole colonies. There's evidence to suggest that in some species natural selection can act not just on the level of individuals, but on the level of colonies as well. So the scientists evolved the robots by selecting the best-performing colonies, rather than plucking out individuals. And this colony-level selection boosted the robots' performance even more, to an average of 200 points. (A fine point: the scientists also ran the experiment with colony-level selection on unrelated robots. They scored 120 points--good, but not as good as the others.)

Here, however, is where the experiment got really intriguing. Each robot wore a kind of belt that could glow, casting a blue light. The scientists now plugged the blue light into the robot circuitry. Its neural network could switch the light on and off, and it could detect blue light from other robots and change course accordingly.

The scientists started the experiments all over again, with randomly wired robots that were either related or unrelated and that experienced selection as individuals or as colonies. At first the robots just flashed their lights at random. But over time things changed. In the trials with relatives undergoing colony selection, twelve out of the twenty lines began to turn on the blue light when they reached the food. The light attracted the other robots, bringing them quickly to the food. The other eight lines evolved the opposite strategy. They turned blue when they hit the poison, and the other robots responded to the light by heading away. Two separate communication systems had evolved, each benefiting the entire colony. By communicating, the robots also raised their score by 14%. Here's a movie showing six of these chit-chatting robots finding a meal.
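The difference between individual and colony selection comes down to how fitness is assigned before breeding, and a small sketch makes it concrete. The colony size of ten follows the text; the scores and function names are illustrative inventions, not numbers from the experiment.

```python
COLONY_SIZE = 10  # groups of ten robots per arena

def colony_selection(colonies, n_keep):
    """Rank whole colonies by their summed score and keep the best.
    Every robot in a winning colony reproduces, regardless of its
    own individual score."""
    return sorted(colonies, key=sum, reverse=True)[:n_keep]

def individual_selection(colonies, n_keep):
    """Pool every robot's score and keep the top individuals,
    ignoring colony membership entirely."""
    all_scores = [score for colony in colonies for score in colony]
    return sorted(all_scores, reverse=True)[:n_keep]

# Made-up scores: colony A is a team of steady foragers,
# colony B has one star hoarding the food.
colony_a = [20] * COLONY_SIZE                 # sums to 200
colony_b = [90] + [5] * (COLONY_SIZE - 1)     # sums to 135
best_colony = colony_selection([colony_a, colony_b], n_keep=1)[0]
top_individuals = individual_selection([colony_a, colony_b], n_keep=1)
```

The two rules pick different winners here: colony selection favors the cooperative team, while individual selection rewards the lone star, which is why the selection regime changed what kind of signaling could evolve.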
A similar robot language arose in the other two treatments (non-relatives with colony selection and relatives with individual selection), although in those cases it didn't give them as big a boost.

A truly perverse language sprang up among the individually selected non-relatives. In all twenty trials, the robots tended to emit blue light when they were far away from the food. The other robots were attracted to them anyway, even if it meant they had to abandon their food. The scientists speculate that this deception evolved because the robots initially were turning blue at random. Since the only place where a lot of robots would tend to aggregate was around the food, a strategy evolved to head for the blue light. But that strategy opened up the opportunity for robots to fool each other. If they switched on their blue light when they were away from the food, they would distract other robots, reducing the competition for access to the food.

And without kinship to give them a common genetic destiny, the robots got better at fooling one another. In their individualistic scramble, they ended up performing disastrously. Unlike in the other versions of the experiment, the deceptive robots actually scored worse than they did without the chance to evolve communication.

There are lessons both abstract and practical here. The rules that govern social organisms may apply to man-made machines as well. And if you want to avoid a robot uprising, don't let robots have kids and don't let them talk to each other.