Nice guys do not always finish last. In fact, they sometimes finish first. And now we have scientific evidence to prove it, thanks to the work of two Austrian mathematicians who have discovered the value of forgiveness. Or at least they’ve discovered how forgiveness might have come into being in our dog-eat-dog world.
“Generosity pays off under conditions of uncertainty. You should not be too tolerant, but not too intolerant, either,” says Karl Sigmund, a 47-year-old mathematician at the University of Vienna. “Never forget a good turn, but try occasionally to forgive a bad one. We benefit from cultivating a keen sense of gratitude dosed with a small amount of generosity.”
The American embassy, bristling with listening posts, lies outside Sigmund’s office window. Just over the border to the south, warring Serbs and Croats are killing each other, while to the east the former Soviet empire is imploding. No wonder the evolution of cooperation is a hot subject in Vienna, where this past year Sigmund and his former graduate student Martin Nowak codiscovered what one might call the Viennese Golden Mean.
In spite of its happy ending, this story about how nice guys came into existence and survived is fraught with near misses and harrowing escapes. And it turns out that nice guys can’t do it on their own. They need help along the way from some not-so-nice guys, who will then disappear in a final apocalyptic outbreak of good feeling.
“I by no means want to give the impression that this tendency toward generous cooperation is the usual thing,” says Sigmund, who has wild shocks of hair standing up on top of his head, a bottle-brush mustache, and spectacles. As he maps out the limits of this new theory of forgiveness, he sits under an etching of Captain Nemo steering a submarine 20,000 leagues under the sea. “It only works after the cooperators get help from stern retaliators. This is the basic message: To get cooperation you need a police force, but then the police die out. So it’s good to have police but not to be police!” If all this sounds strange and confusing, welcome to the bizarre world of the mathematics of forgiveness.
Scientists have long been mystified as to why anyone would ever do something unselfish for someone else. These displays of niceness are called altruism, and they don’t seem to square with the Darwinian scheme of things. The point of the evolutionary game is to pass along your genes, and your best chance to do that usually means grabbing as much food and other resources for yourself and your progeny as you can. Animals with unselfish, generous impulses would seem ill-equipped to compete and likely candidates for a quick death. Darwin called it the survival of the fittest--not the nicest.
So why do vampire bats share their blood meals with unrelated, less fortunate neighbors? A vampire bat has to consume between 50 and 100 percent of its body weight in blood every night. It will die if it fails to feed for two nights in a row. But a bat on the edge of starvation can gain 12 hours of life and another chance to feed if it is given a regurgitated blood meal by a roost mate. Someone has actually worked out the odds on this vampire buddy system. If the bats didn’t practice food sharing, their annual mortality rate would be 82 percent. With food sharing, this rate drops to 24 percent.
Bats aren’t alone in their generosity. Food sharing among unrelated individuals is also practiced by wild chimpanzees and occasionally by human beings. Stickleback fish team up to inspect dangerous predators. Black hamlet fish, which have both male and female sex organs, take turns fertilizing each other’s eggs; one fish, after being fertilized, could selfishly cut and run, leaving its former partner in the lurch.
After the Second World War, the biological debate about why altruism exists got picked up by social scientists. If vampire bats could get their act together, wasn’t there hope for the Kremlin and the Pentagon? The most famous paradigm for modeling the evolution of cooperation among selfish individuals is a mathematical game called the prisoner’s dilemma. It works like this:
Imagine two prisoners facing life in the slammer for a crime they committed together. They are questioned separately by the authorities. If they resist the temptation to rat on each other, their alibi will hold and they’ll both be released after a few months in jail. (Let’s assign this outcome a value of 3 points each, with the object of the game being to score the most points.) If both prisoners chicken out and rat on each other, they’ll each get a longer sentence (albeit less than the maximum because they get time off for turning state’s witness); this lower payoff is worth 1 point each. But the highest payoff goes to the prisoner who rats while his buddy remains silent; then the ratter goes scot-free, for 5 points, while the silent sucker gets the maximum sentence, for 0 points.
In the simple prisoner’s dilemma, where the game is played only once and no future matches are envisioned, the rational choice about how to act is so clear that there is really no dilemma at all. If you’re nice, you risk your opponent’s playing you for a sucker; the only way to minimize the risk is to rat. You’ll get at least 1 point, and as many as 5 points if your opponent is a gullible fool. Sorry, nice guys. No grounds for hope yet.
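The scheme is compact enough to set down in a few lines of Python (a sketch; the "C"/"D" move labels and the dictionary layout are our own shorthand for the point values assigned above):

```python
# Prisoner's dilemma payoffs from the text, as (player 1, player 2) points:
# staying silent is "C" (cooperate with your partner), ratting is "D" (defect).
PAYOFF = {
    ("C", "C"): (3, 3),  # both stay silent: a few months in jail each
    ("C", "D"): (0, 5),  # the silent sucker vs. the ratter who goes free
    ("D", "C"): (5, 0),
    ("D", "D"): (1, 1),  # both rat: longer sentences for each
}

# In a single round, defecting pays more no matter what the other player does:
for their_move in ("C", "D"):
    assert PAYOFF[("D", their_move)][0] > PAYOFF[("C", their_move)][0]
```

This is the one-shot logic in miniature: against a cooperator, 5 beats 3; against a defector, 1 beats 0.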
But this model of life, which pictures the world as a dark alley full of strangers meeting only once in their lives, is not particularly realistic. It is far more likely that we keep meeting the same boyz in the hood. Mathematicians have taken account of this fact by iterating, or repeating, the prisoner’s dilemma, so that the same players face each other again and again.
When you play repeated rounds of the prisoner’s dilemma, the game is completely different. The most surprising fact is that the competition no longer has a single strategy that is better than all the others. The game becomes contingent, chancy. Everything depends on whom you are playing at any particular moment.
“For the simple prisoner’s dilemma you have one best strategy, which is to defect,” says Sigmund. “But for the repeated prisoner’s dilemma there is no best strategy, and it is impossible that a single one could ever be found. You can always encounter situations where it’s better to switch strategies. It all depends on what kind of player you’re partnered with.”
Consider the following example. If you meet a relentless defector, you should always defect. If you meet an all-out cooperator, you should also always defect. But if you meet a grim retaliator--someone who cooperates until his opponent defects and from that moment on never cooperates again--you should cooperate.
Two mad-dog defectors hammering each other without surcease will pick up only 1 point per round, while two cooperators, consistently scratching each other’s backs, will rack up a steady 3 points per round. We begin to see how cooperation can pay off, and in the long run, with the arbitrary values we have assigned to the game, 3 points is the highest average payoff anyone can expect to earn per round.
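The arithmetic of the repeated game can be sketched the same way (the two strategies below are just the extremes described above, not entries from any actual tournament):

```python
# Payoffs per round, as in the one-shot game: (my points, opponent's points).
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def always_defect(opponent_history):
    return "D"

def always_cooperate(opponent_history):
    return "C"

def play(strat1, strat2, rounds):
    """Play repeated rounds; each strategy sees the other's past moves."""
    hist1, hist2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        move1, move2 = strat1(hist2), strat2(hist1)
        pts1, pts2 = PAYOFF[(move1, move2)]
        hist1.append(move1); hist2.append(move2)
        score1 += pts1; score2 += pts2
    return score1, score2

# Two defectors grind out 1 point per round; two cooperators earn 3.
print(play(always_defect, always_defect, 100))        # (100, 100)
print(play(always_cooperate, always_cooperate, 100))  # (300, 300)
```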
Mathematicians, game theorists, biologists, and arms negotiators had been battling one another for 30 years, arguing about which strategy was best for playing repeated rounds of the prisoner’s dilemma, when Robert Axelrod, a political scientist at the University of Michigan, decided to settle the matter with a computer tournament. Unlike humans, who get bored and sloppy when working at this level of detail, computers can play these strategies against each other ad nauseam, or at least until the plug gets pulled.
Researchers around the world mailed Axelrod 14 different computer programs. He added one of his own and played all of them against each other in a round-robin tournament in 1978. The winner was the simplest program, Tit for Tat. It came from Anatol Rapoport, a former concert pianist and one of the grand old men of game theory. “Rapoport had political reasons for being interested in the prisoner’s dilemma,” says Sigmund, who met him when Rapoport moved from Canada to Vienna in the early eighties. “Rapoport was strongly committed to the peace movement, which may be one of the reasons he came to a neutral country like Austria.”
Rapoport saw the arms race between the superpowers as one of the most spectacular examples of the prisoner’s dilemma. And a logical arms-race strategy, Rapoport reasoned, was Tit for Tat: cooperate in the first round and then imitate whatever the other player does.
As successful as it may be in some situations, Tit for Tat is not always the best strategy. On analyzing the results of his first tournament, Axelrod found that Tit for Tat could have been beaten by another strategy that no one had thought to enter: Tit for Two Tats. This more generous policy calls for retaliation only after two consecutive rounds of defection from the other player. Its forgiving nature elicited more cooperation from other players over the long run, so that even though Tit for Two Tats wouldn’t win any single encounter, overall it would rack up the most points. In contrast, stricter strategies such as Tit for Tat would be forced into battling retaliators, thereby getting a low number of points and falling behind Tit for Two Tats in scoring.
Axelrod held a second round-robin tournament in 1979. Tit for Two Tats was one of 63 programs entered. Although it would have won the first tournament, this time the strategy was manifestly trounced, finishing twenty-fourth. What happened was that people entered nastier strategies, which were designed to take advantage of nice guys. They knew Tit for Two Tats would retaliate only after two defections in a row, so these mean strategies simply defected every other round. They racked up a lot of points, and Tit for Two Tats could neither retaliate nor tempt the others to cooperate. This shows how contingent the game can be, the outcome changing with every unique constellation of players.
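The exploit is easy to reproduce in a sketch. Below, an every-other-round defector -- our own stand-in for the “nastier strategies” of the second tournament -- plays ten rounds against each policy. Tit for Two Tats never sees two defections in a row, so it never retaliates; Tit for Tat holds the exploiter to a draw:

```python
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opp):
    return "C" if not opp else opp[-1]

def tit_for_two_tats(opp):
    # Retaliate only after two consecutive defections by the opponent.
    return "D" if opp[-2:] == ["D", "D"] else "C"

def every_other_round(opp):
    # A "mean" strategy: defect on rounds 1, 3, 5, ... and cooperate between.
    return "D" if len(opp) % 2 == 0 else "C"

def play(strat1, strat2, rounds):
    h1, h2, s1, s2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = strat1(h2), strat2(h1)
        p1, p2 = PAYOFF[(m1, m2)]
        h1.append(m1); h2.append(m2); s1 += p1; s2 += p2
    return s1, s2

# Tit for Two Tats gets fleeced, never once striking back:
print(play(tit_for_two_tats, every_other_round, 10))  # (15, 40)
# Tit for Tat answers each defection and ties the exploiter:
print(play(tit_for_tat, every_other_round, 10))       # (25, 25)
```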
So who was the winner? Trusty old Tit for Tat, which once again showed how well it does against selfish players. If a player cooperates, so does Tit for Tat; if the player defects, Tit for Tat does, too. There’s little room to take advantage of it. The selfish player can’t get too far ahead, and Tit for Tat grabs a lot of points when playing other cooperators. Overall, it gets the highest point total.
In 1980 Axelrod came up with a different kind of competition. He wanted to use computers to simulate natural selection by modeling ecological encounters in nature. Participants in this ecological tournament formed a population that altered with each repetition of the game. The strategies that got the most points during Round One, for instance, would be rewarded with offspring: two or three other versions of themselves, all of which would participate in the next round. In this way one can build up entire populations of strategic cooperators or defectors. During successive rounds, winning strategies multiplied while less successful rivals died out.
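One generation of such an ecological update can be sketched as follows (the strategy names, population counts, and scores here are hypothetical numbers chosen for illustration, not Axelrod's data):

```python
# Ecological update: each strategy's share of the next generation is
# proportional to the points its players earned in this one.
population = {"TitForTat": 100, "AlwaysDefect": 100, "Random": 100}
scores = {"TitForTat": 300, "AlwaysDefect": 150, "Random": 225}  # hypothetical

size = sum(population.values())
weight = {s: population[s] * scores[s] for s in population}
total = sum(weight.values())
next_generation = {s: size * weight[s] / total for s in population}

# High scorers multiply; low scorers shrink toward extinction.
print(next_generation)
```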
This is the tournament that first got Karl Sigmund interested in the prisoner’s dilemma. Sigmund had been working on theoretical chemistry at the University of Vienna, studying the hypercycle, a system of self-reproducing molecules that might hold clues to how life evolved on Earth. “I got very excited when I found the prisoner’s dilemma had meaning in evolutionary biology,” says Sigmund. He got so excited, in fact, that he switched from studying self-replicating molecules to studying models of animal behavior.
“For a mathematician,” says Sigmund, “whether you study molecules or the behavior of animals doesn’t matter. It all reduces to the same differential equations. Mathematically this is really one field: the population dynamics of self-replicating entities. They can be RNA molecules or reproductive strategies or animals preying on each other or parasites or whatever. Success determines the composition of the field, and the composition determines success. It is not easy to predict where this may lead.”
Sigmund watched with interest as Axelrod played his ecological tournament out to the thousandth round. Again, Tit for Tat was the winner. “At first glance this may appear to be paradoxical, but of course it isn’t,” says Sigmund. When thrown into a hornet’s nest of inveterate defectors, a single Tit for Tat will do less well than the meanies, because it loses out on the first round before switching into tough-guy mode. But when playing itself or other nice strategies, Tit for Tat will do significantly better than such hard-liners as Always Defect, which can’t get more than one point per interaction with itself. “The moral is that Tit for Tatters do best when they start interacting in clusters or families,” Sigmund says. “Kinship facilitates cooperation.” In a mixture of Always Defect and Tit for Tat, even if only a small percentage of the population is using the nice policy, that policy will start reproducing itself and quickly take over the game.
Even as Tit for Tat was racking up one success after another, it was clear to Axelrod and Sigmund that the strategy had a fatal flaw. “It has no tolerance for errors,” says Sigmund. While computer programs interact flawlessly, humans and other animals certainly do not. “In biological or human interactions, it is clear that sometimes the wires get crossed and you make a mistake about the identity of someone. You meet a friend and, not recognizing him, you defect.” This is the Achilles’ heel of Tit for Tat, which is particularly vulnerable against itself.
A realistic version of Tit for Tat against Tit for Tat can fall into endless cycles of retaliation. Since all it knows how to do is strike back at defectors, one scrambled signal will send Tit for Tat spiraling into vendettas that make the Hatfields and the McCoys look tame by comparison. The average payoff for Tit for Tat drops by 25 percent if you introduce only a few such mistakes into the tournament. “This is a terrible performance,” Sigmund says, noting that a random strategy (unthinkingly defecting or cooperating in each round with equal probability) will do just as well.
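The vendetta is easy to reproduce in a sketch. Below, Tit for Tat plays a copy of itself for 100 rounds, but player 1 is forced into a single mistaken defection at round 10; note that one error costs less than the 25 percent quoted above for repeated mistakes, since a second error can lock both players into permanent mutual defection:

```python
PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def tit_for_tat(opp):
    return "C" if not opp else opp[-1]

h1, h2, s1, s2 = [], [], 0, 0
for rnd in range(100):
    m1 = "D" if rnd == 10 else tit_for_tat(h2)  # the one mistaken defection
    m2 = tit_for_tat(h1)
    p1, p2 = PAYOFF[(m1, m2)]
    h1.append(m1); h2.append(m2); s1 += p1; s2 += p2

# Ten cooperative rounds, then 90 rounds of alternating reprisals:
print(s1, s2)   # 255 255 -- an average of 2.55 per round
print(3 * 100)  # 300 -- what two error-free copies would have earned
```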
The obvious way to break up this vicious cycle of grim retaliation consists in being ready, on occasion, to let bygones be bygones. “We can even compute the optimal measure of forgiveness,” says Sigmund. The Viennese Golden Mean--the extra dose of generosity that makes for the best of all strategies in a less-than-perfect world--is contained in the following rule: Always meet cooperation with cooperation, and when facing defection, cooperate on average one out of every three times. The strategy that embodies this rule is called Generous Tit for Tat.
The merits of Generous Tit for Tat were already known in the early 1980s, when a Swedish scientist named Per Molander computed the benefits of generosity in a world where pure Tit for Tat could sometimes make a mistake--like Oedipus meeting his father. Molander arrived at the figure of one-third through repeated rounds of trial and error. Molander’s finding does not mean that you should turn the other cheek to every third blow. Obviously it would be a big mistake to let your opponent know exactly when you were going to be nice. The number is just an average.
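A sketch shows how the one-in-three rule ends the feud: the same forced-error experiment as before, but with Generous Tit for Tat on both sides (the random seed is an arbitrary choice; only the one-third average matters):

```python
import random

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

def generous_tit_for_tat(opp):
    if not opp or opp[-1] == "C":
        return "C"
    # Facing a defection, forgive on average one time in three.
    return "C" if random.random() < 1 / 3 else "D"

random.seed(7)
h1, h2, s1, s2 = [], [], 0, 0
for rnd in range(100):
    m1 = "D" if rnd == 10 else generous_tit_for_tat(h2)  # same forced error
    m2 = generous_tit_for_tat(h1)
    p1, p2 = PAYOFF[(m1, m2)]
    h1.append(m1); h2.append(m2); s1 += p1; s2 += p2

# The feud ends as soon as one player forgives; scores recover toward 300
# instead of the 255 apiece that strict Tit for Tat is left with.
print(s1, s2)
```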
Generous Tit for Tat looked great on paper, but Sigmund wanted to know if it bore any resemblance to reality. Was there any proof that an evolving population--be it molecules, animals, or strings of numbers in a computer program--would actually adopt Generous Tit for Tat? “We wanted to know whether Generous Tit for Tat was biologically relevant,” Sigmund says. “You could easily show it was the best strategy for a group, but would evolution necessarily lead toward this best strategy?”
To answer the question, Sigmund and Martin Nowak organized a computer tournament of their own in 1991. Nowak had been a chemistry student at the University of Vienna when he heard Sigmund give a lecture on the prisoner’s dilemma. He found the problem so intriguing that he allowed himself to be kidnapped by Sigmund and converted into writing a dissertation on the mathematics of the game. Nowak finished the work in a year--a university record--before heading to Oxford, the mother church of theoretical biology.
Realizing how important mistakes can be to the prisoner’s dilemma, Sigmund and Nowak started playing the game with what are called stochastic strategies. Stochastic means random, and strategies such as these--Generous Tit for Tat being one of them--allow for reacting to one’s opponent with a degree of flexibility.
“We had been experimenting with games where everyone was playing the same strategy and watching what would happen when a small minority came in and started playing something else,” says Sigmund. “Would they spread or be wiped out? Later we got the idea, why not try out our model with 100 different strategies, chosen at random to be more or less tolerant, more or less forgiving? Some would forgive one out of two times, some one out of five, and so on. And some, of course, would never forgive.”
All Sigmund and Nowak needed to organize their tournament was a random number generator, something to tell the computer which two strategies to face off against each other at a given moment. “But we didn’t have a random number generator,” Nowak says. “So I just made one.” Nowak came to Vienna over Christmas, and the two men loaded their players, including Generous Tit for Tat, into Nowak’s laptop computer. Then they sat back and watched the players duke it out for thousands of generations. Winning strategies reproduced themselves. Losers got knocked out of the ring.
“Everything was looking very bright, going just the way we expected,” says Sigmund. The more generous strategies invariably gained the upper hand. “But when Nowak got back to Oxford and started working on a proper computer, we discovered we had been using the wrong random number generator, so we weren’t really representing every possible percentage of retaliation.” Nowak explains: “The one I made was biased. Some strategies came up more often than they should have. Tit for Tat was one of them.”
When Nowak fixed the problem, giving all the strategies a level playing field, things got out of hand. Instead of nice strategies taking over the game, evolution consistently veered to Always Defect, “which we didn’t like at all,” Sigmund says. “I mean, nature doesn’t work that way. It wasn’t realistic.”
After two black weeks of watching meanies invade their tournament, Nowak and Sigmund accidentally stumbled on the key to evolving cooperation. They noticed the game took a radically different turn with one small alteration. If a dose of Tit for Tat--just enough to establish a tiny enclave, a dose as small as one percent of the population--was added to the game at the start, then the game flipped direction. “Generous Tit for Tat is not strong enough to organize this emergence of cooperation,” says Sigmund. “What one needs is a kind of police force, a minority that helps by its very strictness to effect this move but that does not ultimately prove the best.”
“If you don’t have Tit for Tat but some other strategy, it cannot do it. It must be a very strict retaliator. But then after you have the switch toward cooperation, it is not Tit for Tat that profits. Its frequency goes up, but then it yields to Generous Tit for Tat. Tit for Tat is not the aim of evolution, but it makes it possible. It is a kind of pivot.”
For 100 generations the Always Defect strategy dominates the population with what looks like inescapable ferocity; it looks so bad you almost give up hope. A beleaguered minority of Tit for Tat survives on the edge of extinction. But when the suckers are nearly wiped out and the exploiters have no one left to exploit, the game reverses direction. The retaliators spring back to life. The exploiters suffer crippling reverses. “There was great pleasure,” Sigmund recalls, “in watching the Always Defectors weaken and then die out.”
But the staunch Tit for Tatters are not the ones who ultimately win the game. They will lose out to their even nicer cousins, who exploit Tit for Tat’s fatal flaw of not being forgiving enough to stomach occasional errors. After 100 generations the game swings from nasty to nice, and after 300 generations it swings again to extra-nice, with Generous Tit for Tat so firmly established that no meanies can invade the game.
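The whole arc can be reproduced in miniature. The sketch below is not Nowak and Sigmund's program: it keeps just three strategies (Always Defect, Tit for Tat, Generous Tit for Tat), treats each as a reactive rule with a 1 percent chance of a mistaken move, computes each pairing's long-run payoff from the four-state Markov chain of last-round outcomes, and then gives each strategy offspring in proportion to its average score. The 1.5 percent Tit for Tat seed and the error rate are illustrative choices, not the published parameters:

```python
R, S, T, P = 3, 0, 5, 1  # reward, sucker's payoff, temptation, punishment
ERR = 0.01               # chance that an intended move comes out wrong

def noisy(p):
    """Intended cooperation probability p, after the execution error."""
    return p * (1 - ERR) + (1 - p) * ERR

# Reactive strategies: (prob. of cooperating after opponent's C, after D).
STRATS = {
    "AlwaysDefect": (0.0, 0.0),
    "TitForTat":    (1.0, 0.0),
    "GenerousTFT":  (1.0, 1 / 3),  # forgive a defection one time in three
}

def avg_payoff(s1, s2):
    """Long-run payoff to s1 against s2: power-iterate the 4-state Markov
    chain over last-round outcomes (s1's move, s2's move)."""
    p1c, p1d = noisy(s1[0]), noisy(s1[1])
    p2c, p2d = noisy(s2[0]), noisy(s2[1])
    states = [(a, b) for a in "CD" for b in "CD"]
    dist = {st: 0.25 for st in states}
    for _ in range(3000):
        nxt = dict.fromkeys(states, 0.0)
        for (a, b), prob in dist.items():
            c1 = p1c if b == "C" else p1d  # s1 reacts to s2's last move
            c2 = p2c if a == "C" else p2d
            for a2, pa in (("C", c1), ("D", 1 - c1)):
                for b2, pb in (("C", c2), ("D", 1 - c2)):
                    nxt[(a2, b2)] += prob * pa * pb
        dist = nxt
    pay = {("C", "C"): R, ("C", "D"): S, ("D", "C"): T, ("D", "D"): P}
    return sum(prob * pay[st] for st, prob in dist.items())

names = list(STRATS)
M = {(i, j): avg_payoff(STRATS[i], STRATS[j]) for i in names for j in names}

# Start with defectors everywhere plus a tiny Tit for Tat "police" enclave.
share = {"AlwaysDefect": 0.98, "TitForTat": 0.015, "GenerousTFT": 0.005}
for _ in range(1000):
    fitness = {i: sum(share[j] * M[(i, j)] for j in names) for i in names}
    mean = sum(share[i] * fitness[i] for i in names)
    share = {i: share[i] * fitness[i] / mean for i in names}

# Always Defect rules early, Tit for Tat flips the game, and Generous
# Tit for Tat -- not the pivot itself -- inherits the population.
print({name: round(s, 3) for name, s in share.items()})
```

One detail the chain makes exact: two noisy Tit for Tats average just 2.25 points per round -- the random-strategy figure Sigmund complains about -- which is precisely the weakness Generous Tit for Tat's forgiveness repairs.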
When every player is employing Generous Tit for Tat, reproducing itself without a worry in the world, the game has arrived at the end of history. Evolution has effectively stopped for these self-perpetuating nice guys.
This is a pleasant image, but Sigmund doesn’t believe such states last for long. “Only in a limited strategy space do you reach the end of history. But the space can always be enlarged by adding strategies with more memory or other features. Organisms can build up experience by watching each other and sharing information among themselves. Evolution is certainly not going to stop. The number of possible strategies in this game is fantastic.”
Nor is it a complete picture of the biological world. “This is a very simplified model of confrontation,” he says. “Out of all the imaginable interactions in the world, very few reduce to the prisoner’s dilemma. It is not the most universal, or even the most common, design of interactions found in nature. But it’s so simple. It’s transparent. It’s like a haiku. The world is reduced to a couple of lines.”
Yet it is satisfying to know this is one possible path by which cooperation and selflessness could have been established on this planet. So after publishing their findings in Nature last year, Sigmund and Nowak are already planning more tournaments. “The problem that most interests me now is to get longer memory into play,” says Sigmund. “We would like to model how you can build up trust by repeated interactions, and this building up of trust is something that can happen only when one has longer memory. I think the strongest tendency in natural selection is toward evolving a better memory, but the message of Generous Tit for Tat is that it may also pay to forget.”