Mathematician Stanislaw Ulam is best known for his pivotal role in developing the hydrogen bomb in the 1940s and 1950s, but that’s not what put him in the record books. The government laboratory in Los Alamos, New Mexico, got hold of one of the first computers, MANIAC I, so that Ulam and the other H-bomb researchers wouldn’t have to stay up nights solving their voluminous equations with pencil and paper. Ulam, who described himself modestly as a fair chess player, couldn’t resist putting the machine to work on a project of somewhat less import to cold war strategy. Together with physicist Paul Stein, he wrote one of the first chess-playing programs.
Ulam’s program was a mediocre player at best. Even though it considered a mere two moves in advance, it would still have required an hour of computation each turn had Ulam not also simplified the game by removing two bishops and two pawns from each side and reducing the size of the board from 64 squares to 36. With this help, in 1956 the program just managed to eke out a victory against one of Ulam’s colleagues, becoming, 40 years ago, the first program to beat a human, albeit a rank beginner, at a stripped-down version of chess.
Chess has always been thought of as the quintessential thinking man’s game, so the success of Ulam’s program raised profound questions that are still with us today about what we humans do when we think and whether machines could ever do this, too. The field of artificial intelligence was in its infancy then, and Herbert Simon, an early AI practitioner and now a Nobel laureate at Carnegie Mellon, even ventured to predict in 1957 that a computer would beat the world chess champion within a decade. Simon, of course, was way off. Today human grand masters still reign, as world chess champion Garry Kasparov proved in February by defeating MANIAC I’s descendant, Deep Blue, in a six-game match in Philadelphia.
In terms of raw speed, Deep Blue is a technological marvel that far surpasses anything Ulam or Simon might have imagined back in the 1950s. Custom-built for chess by a team of IBM scientists, Deep Blue has 32 microprocessors that give it the ability to look at 200 million chess positions each second. But brute computation is not what human grand masters use to approach the game. Studies have shown that in a typical position, a strong human player considers on average only two moves. In other words, the player is choosing between two candidate moves that he intuitively recognizes, based on prior experience, as contributing to the goals of the position.
Initially programmers tried to mimic this human approach to chess, but they did not get very far building intuition and pattern recognition into machines. In Deep Blue, sheer computational power--move-by-move, tit-for-tat analysis--makes up for lack of intuition. Though it seems a cold, inhuman approach, brute calculation can produce surprisingly humanlike chess. What made the Kasparov-Deep Blue match so exciting, in fact, was that observers of the event allowed themselves to forget momentarily that such a superfast machine is merely an oversize calculator. When Deep Blue took the first game decisively, the assembled crowd in the Philadelphia convention center was bowled over. As Kasparov and Deep Blue entered the fifth game deadlocked, Kasparov’s breathless comment that the computer appeared to be thinking strategically brought the tension to its height. The last two games, however, proved to be an anticlimax. Kasparov won both, the last so decisively that observers talked about the computer’s appearing confused.
Is it possible that brute force programming might have produced a machine capable of thought as well as befuddlement? Although most researchers think not, such talk, ironically, has given techno-optimist Simon, for one, a new lease on life. The human method, he asserts, is not really much different from the kinds of shortcuts Deep Blue’s creators have programmed into the machine to keep it from drowning in calculations. A human relies on memory when matching a board position with a proven strategy, and Deep Blue does something similar with board positions held in its memory. Instead of intuition, it uses rules of thumb, known in computer science as heuristics, that endow it with a knowledge of chess and keep it from pursuing unprofitable avenues. “Is human thinking just heuristics?” Simon asks. “I’d say yes, it is.”
Regardless of the inner workings of the brain, we have fun playing board games because they are too complex to figure out from beginning to end but not so complex as to overwhelm our intuition. Computers, by and large, are in the same boat. Like drivers in a fog, both human and machine see only so far. Against a player who could see, say, hundreds of moves ahead, most challengers would quickly find themselves at a point where the best they could do was come to a draw. Against a computer big enough and fast enough to be able to calculate all possible moves clear to the end of the game--about 10^120, or a one with 120 zeros after it--there’d be little point in playing at all. Chess would then be, in the parlance of mathematicians and computer scientists, solved. Playing it would be like playing tic-tac-toe: no matter what the first move, the worst you could do is draw.
Most board games are complex enough to keep present-day computers from seeing all the way to the end, at least so far. In chess, for example, a player has on average about 38 legal moves to choose from. To think 2 moves ahead, as Ulam’s program did, a computer has to consider, for each of those possible moves, each and every possible countermove its opponent might make. Then for each of those countermoves it has to evaluate each and every possible move it might make in response, and each and every one of its opponent’s counter-countermoves. Even with heuristics to eliminate some of the obviously stupid options, such a procedure quickly gets out of control. Ulam’s program, looking 2 moves ahead on its six-by-six board, had to analyze 160,000 board positions, which took about ten minutes. Looking ahead 3 moves would have required 64 million positions and several hours of computation. To see 13 moves in advance, Deep Blue had to analyze roughly a hundred billion billion moves.
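For readers who want to check that arithmetic, a few lines of Python will reproduce it. The sketch below simply raises the branching factor to the power of the search depth; the figure of roughly 20 legal moves per turn on Ulam’s reduced board is our own assumption, chosen because it matches the 160,000-position count, and the function name is invented for illustration.

```python
# Back-of-the-envelope game-tree arithmetic (illustrative only).

def positions_examined(branching_factor, plies):
    """Leaf positions in a full-width search: branching_factor ** plies."""
    return branching_factor ** plies

# Ulam's program on the 6x6 board, assuming ~20 legal moves per turn:
print(positions_examined(20, 4))            # 160,000 (two moves by each side)
print(positions_examined(20, 6))            # 64,000,000 (three moves by each side)

# Full chess at ~38 moves per position, searched 13 plies deep:
print(f"{positions_examined(38, 13):.1e}")  # about 3.4e+20, a hundred billion billion
```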
Even such brawn has its limits. In the course of a game, the time eventually comes when a computer has to stop calculating and make a decision. For this job, it needs a way of evaluating the different board positions it might reach. This is called scoring, and it is the part of the program that the designer jealously guards, because it is the repository for the knowledge and expertise that go into making a successful program.
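Scoring schemes differ from program to program, and the details are exactly what designers keep secret, but the basic idea is easy to illustrate. The Python sketch below is a deliberately bare-bones evaluator that just tallies material; the piece values and the board encoding are generic textbook conventions, not Deep Blue’s or anyone else’s actual scoring code.

```python
# A minimal static evaluation ("scoring") function for chess, for illustration.
# Real programs add many more terms: mobility, king safety, pawn structure, and so on.

PIECE_VALUES = {"P": 1, "N": 3, "B": 3, "R": 5, "Q": 9, "K": 0}

def score(pieces):
    """Material balance from White's point of view.

    `pieces` lists codes such as "wP" (white pawn) or "bQ" (black queen)
    for every piece still on the board.
    """
    total = 0
    for piece in pieces:
        color, kind = piece[0], piece[1]
        value = PIECE_VALUES[kind]
        total += value if color == "w" else -value
    return total

print(score(["wK", "wR", "wP", "bK", "bN", "bP"]))  # +2: White is up a rook for a knight
```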
A few years ago, David Levy, an international chess master in London, decided to see just how far game-playing computers had progressed. He organized a Computer Olympiad in London, where different programs played elimination rounds against each other, and then, as a showcase, the best program played the best available human player. He found that among different games, computers had achieved varying standards of play. Among the solved games he eliminated from his event was Connect 4, in which players take turns dropping white or black balls into seven tubes, with the aim of getting four to line up in a row; solving it takes a mere 10 trillion calculations (whoever moves first wins). Other games of similar complexity that had also been solved included Qubic, nine men’s morris, and awari. Most other games appeared safe, at least for the time being. Chess, despite Deep Blue’s successes, may be too complex to solve for many years to come, if at all. Even the far simpler games of checkers, backgammon, and dominoes are still just beyond the scope of present-day computers, though they are pretty close to being cracked. Because of their exhaustive memories, computers can muster a decent game of Scrabble, but they still cannot beat the best human players. By contrast, the Japanese game of go, played on a 19-by-19 board, is so mind-bogglingly complex that the best computer program plays only at the level of an intermediate amateur. The same holds true for bridge, but for different reasons. Since a bridge-playing computer has to deal with uncertainty, its brute force methods hardly work, and it has little choice but to make a poor effort at emulating the subtleties of human strategy.
After four successful competitions, held from 1989 to 1992, Levy opted to bow out of his role of organizer. Until some computer-games aficionado comes forward to fill Levy’s shoes, plans for a fifth Olympiad are up in the air. What follows, though, is a look at where computers stand at four classic games.
Checkers
After seeing Garry Kasparov beat Deep Blue’s immediate predecessor, Deep Thought, back in 1989, Jonathan Schaeffer, a computer science professor at the University of Alberta in Edmonton, resolved to do for checkers what his colleagues at IBM were doing for chess. He set to work on Chinook, a checkers-playing program, and made it his goal to take on the world’s best checkers player.
At the time, this title had belonged for more than 40 years to Marion Tinsley, a mathematics professor at Florida A&M University in Tallahassee. Indeed, he had beaten so many of his opponents so easily that most checkers experts considered him a one-of-a-kind genius. Tinsley first played Chinook in 1988, and all four games were drawn. He won a rematch in 1990, one game to Chinook’s none, with 13 games ending in a draw. Still, Tinsley was sympathetic to Schaeffer’s efforts and expressed interest in the technology behind Chinook. Schaeffer went back and reworked the program entirely, and in 1992 a rematch was played. Tinsley won, four games to two, with 33 draws. Schaeffer spent another two years tinkering with Chinook, and in 1994 he brought the program to the Computer Museum in Boston to play Tinsley once again. It was to be the last time. Tinsley and Chinook had played six games, all of them ending in draws, when Tinsley resigned the match for health reasons. The next week he was diagnosed with cancer. He died a year later.
Chinook has already beaten the current champion, Ron King, seven times without dropping a match, with another nine matches ending in draws. The logical next step for Schaeffer is to solve the game once and for all. Chinook already has about half a trillion board positions solved, squirreled away in its database. This means that when Chinook is playing a game and it sees one of these half trillion boards, it knows it can play a flawless game from then on. “We play a lot of games where on move five of the game, Chinook gets to one of the positions in its database and announces it’s a draw,” says Schaeffer. “Then it’s a matter of whether or not the human makes a mistake.” To solve the game, Chinook needs to increase its database of board positions by a factor of a billion, for a total of half a billion trillion board positions, which would constitute all of the possible positions in the game.
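How such a database gets used in mid-game is simple to sketch. In the toy Python below, a table maps a position key to its proven outcome; the key format, the table’s contents, and the function name are hypothetical stand-ins for illustration, not Chinook’s actual data structures.

```python
# Illustrative sketch of a solved-position lookup (not Chinook's real code).
# Each entry records a position's proven outcome, computed offline.

WIN, DRAW, LOSS = "win", "draw", "loss"

solved_positions = {
    "W:12,16,20|B:23,27,31": DRAW,   # hypothetical checkers position key
}

def lookup(position_key):
    """Return the proven result for a position, or None if it is unsolved."""
    return solved_positions.get(position_key)

verdict = lookup("W:12,16,20|B:23,27,31")
if verdict is not None:
    print(f"Database hit: with best play this position is a {verdict}.")
    # From here the program can follow the stored line and never make a mistake.
```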
Schaeffer believes that this computational task is by no means out of reach. It would take only a year or two on a network of small computers, or perhaps a few days or weeks on a supercomputer. “It’s hardly worth the trouble when in a few years I’ll be able to buy a personal computer that can do it,” he says. The real problem, though, is that Chinook’s quasi victory over Tinsley has left Schaeffer unsatisfied. “To be honest, when Tinsley died, I lost all motivation,” he says. “This guy was just about a perfect player, and I wanted to beat him.”
Go
“I love the game of go,” says Keh-Hsun Chen. “I’ve played it since I was a little child.” By the time he got to the University of North Carolina at Charlotte, the computer scientist had reached the sixth dan--the highest ranking for an amateur go player. In 1988 he decided to program a computer to play the game.
At first glance, go is deceptively simple--it has fewer rules and is certainly easier to learn than chess. Go players, white and black, take turns placing their stones one by one on a 19-by-19 square board. The object is to arrange your stones in such a way that they enclose the biggest area. Despite its simplicity, it is the most complex of the deterministic board games--those games, such as chess and checkers, whose results are knowable in principle, as opposed to games of chance, such as roulette, or such games as bridge in which each player has incomplete information about the state of the game. To solve go, a computer would have to analyze an absurd number of board positions--a number with 172 zeros.
“There is no way computers will ever solve this game. That’ll never happen,” Chen says flatly. No matter how fast computers get, they will still be limited by the signals inside them, which cannot exceed the speed of light. No matter how small the computer, the signal will still have to travel some distance, however minuscule, to complete a step in the calculation. And since it would have so many steps, he says, there’d be no hope of ever being able to complete it in a reasonable period of time. Defeating the best human would be much easier, but even this Chen thinks won’t happen soon.
Chen knew all this when he set out to devise his program. He used pretty much the same principles that were tried in chess: his program anticipates as many moves as it can in a reasonable time span and then follows rules to evaluate the resulting board positions. He also included some rules to keep his computer from being distracted by fruitless possibilities, and some special rules to get his program out of specific situations. His first serious program, Go Intellect, captured first place in Levy’s Computer Olympiads.
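The recipe Chen describes, looking ahead as far as time allows and then applying scoring rules to the resulting positions, is in outline the classic minimax search. The Python sketch below is a generic version of that idea, not Go Intellect’s actual code; the helpers it expects (legal_moves, apply_move, evaluate) are placeholders a real program would supply.

```python
# Generic depth-limited minimax: look ahead a fixed number of plies, then
# score the leaf positions with a heuristic. All helpers are placeholders.

def minimax(position, depth, maximizing, legal_moves, apply_move, evaluate):
    """Heuristic value of `position` after searching `depth` plies ahead."""
    moves = legal_moves(position)
    if depth == 0 or not moves:
        return evaluate(position)   # out of depth (or moves): use the scoring rules
    values = [minimax(apply_move(position, m), depth - 1, not maximizing,
                      legal_moves, apply_move, evaluate) for m in moves]
    return max(values) if maximizing else min(values)
```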
Although Go Intellect can beat beginners, it loses to experienced amateurs. It can shine, though, well into the game, particularly in tight spots. Say white is trying to encircle a bunch of black’s stones, with the aim of capturing that area of the board. Every time white tries to close the circle around black, black thwarts its opponent by adding another stone to a snakelike string of stones that seems to try to keep its head from being trapped inside. Such a string is called a ladder, and the chase can go on so long that the ladder stretches most of the way around the board. In these circumscribed situations, in which the objective--to capture or avoid being captured--is clear, computers seem to be much better than humans.
Bridge
In go, the pieces are on the board for all to see. When it comes to bridge, however, no single player knows precisely how the cards are distributed around the table. And even though the four players in a bridge game are paired off into opposing teams, a player doesn’t even know the hand of his partner sitting across the table; at best he can only pick up on a few discreet hints. Unable even to see the full problem it is being asked to solve, a computer starts off on poor footing indeed.
Tom Throop understands the difficulties involved in programming a computer to play around these uncertainties because he’s been perfecting his program, Bridge Baron, for 40 years now. An avid bridge player and an electrical engineer, Throop took up programming while in the Navy in the 1950s, worked at several defense electronics firms, and eventually formed his own company, Great Game Products. According to David Levy, Bridge Baron is the best bridge-playing program in the world, which is to say it’s pretty awful. “There are no good bridge programs,” Levy says. Throop admits that although his program can beat about 99 percent of all bridge players, among the 200,000 or so members of the American Contract Bridge League, who take the game seriously, it is decidedly below average.
Since you can’t get very far in bridge with brute force calculations, Throop’s program is a mishmash of rules of thumb, which he’s added to and refined over the years. To help it deal with the monumental uncertainty of the typical bridge hand, he’s also thrown in a bit of probability--“There’s more than there used to be,” he says, “but probably not as much as there should be.” Still, his program tends to deal with particularly sticky situations by avoiding them entirely. For instance, most bridge hands involve bidding, in which players establish a target for the coming hand that allows them to gain the maximum number of points. In the course of bidding, players can’t help but give away information about their hand, which the computer can use to its advantage. Occasionally, however, a team will refuse to bid, putting Bridge Baron in a quandary. And when a team bids artificially--that is, deceitfully, merely to confuse its opponent--it is using a tactic wholly beyond the capabilities of computer programs, Bridge Baron included. “The more artificial the bidding is, the more difficult it is to handle,” Throop says. “Our program tends to ignore it. Maybe it should try to interpret it. But how?”
Good bridge players are annoyed by the propensity of bridge programs to approach each move as if it were the first. “Most programs say, ‘What do I play now?’ Then they come back on the next move and say, ‘What do I play now?’ They have to recalculate it each time,” says Throop. “My program is guilty of this, too.” To make matters worse, the game’s brisk, steady pace makes bridge especially difficult to program for. Unlike chess, in which a player can sit and ponder a single move at length, bridge has a tempo that leaves only about 10 or 20 seconds for each move.
Throop continues to improve Bridge Baron’s level of play, but he bemoans his lack of resources. With more money, he thinks he could develop a program that could beat the world’s best human bridge player. Says Throop: “I’d like to call IBM and say, ‘Okay guys, you spent $100 million on chess, and you’ve succeeded. Now why not take on a harder problem?’”
Chess
As the date of the match with Garry Kasparov approached, C. J. Tan, the leader of the IBM engineers who were developing Deep Blue, turned to chess grand master Joel Benjamin for advice on fine-tuning their computer to play better chess. To their surprise, once the match started, they found that more than a little fine-tuning was needed. “We underestimated the amount of chess knowledge we’d need to beat somebody like Garry,” Tan says.
Their oversight underscored the degree to which even a big fire-breathing supercomputer like Deep Blue, which is capable of more brute force number crunching than any chess machine in history, still needs to resort to rules of thumb to achieve a high level of play. Deep Blue’s weaknesses lie in the part of its programming that evaluates the relative strength of different positions on the board. Since the computer does not play with any strategy per se, it relies on the accuracy of these assessments to guide it. Kasparov, with his uncanny ability to suss out weaknesses in his opponent and capitalize on them, switched in the last two games of the match to an atypically nonconfrontational style of play, forcing Deep Blue into misjudgments. According to Tan, Kasparov succeeded in tricking the machine into thinking he was making bad moves, when in fact he was merely strategizing further ahead than the machine could calculate.
Since the match, Tan has been busy imbuing Deep Blue with more chess knowledge and working to increase its computational power. It can use all the brawn it can get: looking one additional move ahead requires a sixfold increase in processing speed.
Nobody, including Kasparov, questions the inevitability of a computer eventually beating the best human player in chess. The question is when it will happen. Tan and company, of course, hope the day will come when their new and improved Deep Blue defeats Kasparov in a rematch. But Levy thinks that day is far away. “I still think it’ll be 15 years before a program can beat Kasparov,” he says. “In the final games, he had clearly figured out the computer’s weakness. If they had played another ten, Kasparov would have won decisively.”