
How Human Brains Outwit Supercomputers

IBM's supercomputer may have defeated the Jeopardy! champs, but with talents such as bluffing, lying, and intuition, humans trounce computers in many other matchups. 

By Andrew Moseman
Published Mar 1, 2012; updated Nov 12, 2019
(Image credit: Mr. SUTTIPON YAKHAM/Shutterstock)

Last year the machines finally beat us at our own game. IBM’s megacomputer, Watson, creamed the hominid competition at the quirky, punny, idiosyncratic Jeopardy! This contest, calling on such skills as language, grammar, and wordplay, is among the most human of games—much more so than the mathematical system of chess, which IBM’s Deep Blue mastered in the 1990s. Does that make Jeopardy! a machine’s greatest gaming challenge? Not quite, says AI (artificial intelligence) researcher and computer science professor Bart Massey of Portland State University in Oregon. Checkers, chess, Scrabble, bridge, backgammon, poker, Stratego, and more—software designers are scrambling to create systems to crack each one. The history of competitive computers is the history of overestimating the rise of the machines and underestimating the strength of the human brain. Watson may have triumphed, but computers still lag behind the best human players in many of our favorite games.

The Grid: Where Computers Reign

Tic-tac-toe and Connect Four have small, two-dimensional playing areas, and the rules are simple: Players take turns trying to place three or four marks in a row while preventing an opponent from doing the same. That simplicity is what makes them kids’ games, the kind humans tend to tire of as they grow older. It is also what makes them perfect for computers.

There’s no hidden info or luck involved in tic-tac-toe, Massey writes in his notes on AI and games. Most importantly, a computer can do what a child cannot: simulate every possible outcome and choose the perfect move. As a result, tic-tac-toe and Connect Four fall into the category of solved games. Computers play them perfectly. You lose.
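
What “simulating every possible outcome” means is easiest to see in code. Here is a minimal sketch of the standard minimax search applied to tic-tac-toe; the names and structure are ours for illustration, not taken from any particular program Massey describes:

```python
# A minimal minimax solver for tic-tac-toe. The board is a list of 9 cells:
# 'X', 'O', or None. minimax() returns the game's value for the player to
# move: +1 for a forced win, 0 for a draw, -1 for a forced loss.

WINS = [(0, 1, 2), (3, 4, 5), (6, 7, 8),    # rows
        (0, 3, 6), (1, 4, 7), (2, 5, 8),    # columns
        (0, 4, 8), (2, 4, 6)]               # diagonals

def winner(board):
    for a, b, c in WINS:
        if board[a] is not None and board[a] == board[b] == board[c]:
            return board[a]
    return None

def play(board, i, player):
    new = board[:]
    new[i] = player
    return new

def minimax(board, player):
    opponent = 'O' if player == 'X' else 'X'
    if winner(board) == opponent:            # the last move won the game
        return -1
    if all(cell is not None for cell in board):
        return 0                             # board full with no winner: draw
    # Try every empty square; assume the opponent replies perfectly.
    return max(-minimax(play(board, i, player), opponent)
               for i, cell in enumerate(board) if cell is None)

# From an empty board, perfect play by both sides is a draw.
print(minimax([None] * 9, 'X'))              # prints 0
```

Run this and the machine has, in effect, done what the child cannot: it has seen every line of play through to the end.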

But what happens when the board gets bigger and the rules more complex?

When We Were Kings

Checkers, played with pieces that move and interact on a larger 8-by-8 board, presents a much greater computing challenge. Jonathan Schaeffer, a University of Alberta professor who helped create a poker-playing AI, has worked since 1989 to design a checkers champion. Years before Deep Blue faced off against chess master Garry Kasparov, Schaeffer’s Chinook program matched wits with checkers master Marion Tinsley. Considered by some the greatest player of all time, Tinsley lost only three competitive games between 1950 and 1992 among the thousands he played, Schaeffer says. The man won his first high-profile match against the machine in 1992, but by their second world championship bout it looked like a real contest: They played six games, fighting each other to a draw each time. Tinsley, who was ailing, withdrew from the match. He died the following year.

If you need a yardstick by which to measure the complexity of checkers, consider that 13 years passed between the second Tinsley championship match—when Chinook took on the man Schaeffer described as “as close to perfection as you could imagine in a human” and played him to a standstill—and Schaeffer’s grand victory over the game itself, when Chinook became able to calculate every possible series of moves from every possible position. Checkers has about 5 x 10^20 possible positions, and Schaeffer knew that the machine would not be able to finish the computation with sheer brute power within his lifetime.

So his team sought a shortcut, turning to better algorithms to sort through the positions, rather than waiting around for a next-generation supercomputer to land in their laps. The algorithms cut down the number of positions that Chinook needed to examine from 10^20 to 10^14, allowing Schaeffer finally to complete the work in 2007. Not too long to wait for perfection.
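
A back-of-the-envelope calculation shows why the better algorithms, and not faster hardware, were the story. Suppose a machine can evaluate 10 million positions per second (an illustrative rate of ours, not a figure from Schaeffer). Churning through 10^20 positions would take 10^13 seconds, on the order of 300,000 years; churning through 10^14 positions takes 10^7 seconds, roughly four months. The million-fold reduction is the difference between a geological wait and a single season of computing.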

Checkmate?

Watson gets compared to Deep Blue, IBM’s chess-playing program, but in many ways, they couldn’t be more different. Where Watson must understand human language, chess is written in the computer’s mother tongue—math and probability.

But chess is a complex game to master, even for a machine. Each type of chess piece moves in its own particular direction and distance, where checkers differentiates solely between regular and kinged pieces. Chess uses all 64 squares on the board, where checkers uses half as many. Chess is an avalanche of digits fit for a computer, and in 1997 Deep Blue defeated Russian grand master Garry Kasparov in a six-game match. If the machine whopped the man back when Intel Pentium IIs first hit the market, AI must now be close to solving the game, right?

“Absolutely not,” Schaeffer says. “There is no way chess can be solved with modern technology. Even if you gave me a million computers, and I were using them 24 hours a day, 7 days a week, I could not solve chess.”

A modern chess AI could beat your brains out—and Deep Blue’s, too. But researchers can’t even calculate the number of possible positions in chess, Schaeffer says. It’s probably somewhere between 10^40 and 10^50—at least 20,000,000,000,000,000,000 times more complex than checkers.

And chess isn’t even the trickiest grid game. In the Chinese game Go, two players attempt to encircle each other’s stones by placing them at the intersections of a 19-by-19 grid. AI expert Bart Massey says Go computers struggle to match what a brain can do: The best human performers almost always beat the strongest computers, even when the machine gets a large handicap.
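
The usual way to quantify that struggle is the branching factor: the number of legal moves available in a typical position, commonly cited as roughly 35 for chess and 250 for Go. Looking just four moves ahead for each player, a chess program confronts about 35^8, or 2 x 10^12, lines of play; a Go program confronts about 250^8, or 1.5 x 10^19—several million times as many.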

The Stratego Stratagem

In one sense, chess is easy for a computer—there’s a right and wrong answer in every case, points out Imer Satz, a retired software manager and computer hobbyist. People may disagree on what the best move is, Satz says—“but that just means that somebody’s right and somebody’s wrong, or they’re all wrong.” He programs for a much tougher challenge: Stratego, in which there is no “correct” move.

On the surface the game seems like chess: Two players line up rows of attackers across from each other on a square board and attempt to capture the flag of their opponent—the one move that will end the game.

But chess is an open book, with pieces and their positions out there for all to see. Stratego is shadowy. You watch your opponent move a piece, but you have no idea which piece it is. A marshal, the highest-ranked attacker? Or a miner, a defenseless piece used to defuse bombs players place around flags? The only way to tell without attacking it is to infer your opponent’s mind-set and try to plumb his or her strategy.

Computers, with no experience being human, struggle to read minds. So Probe, Satz’s Stratego-playing AI, understands human tendencies instead—those situations in which humans are statistically most likely to bluff. It knows that players often act aggressively with a weak piece to try to mislead an opponent. It knows—from watching humans play, and because Satz told it so—that most players prefer to have strength on their right sides rather than their left (perhaps an effect of our species’ tendency to be right-handed, a quirk computers do not share). And, most dangerously, it remembers what you like to do and how you like to arrange your pieces. “Some people have played several thousand games against it,” Satz says. “Probe is able to remember all the setups those opponents used, which is devastatingly strong information.”
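
Satz has not published Probe’s internals, so the following is only a sketch of the setup-memory idea he describes: a per-opponent frequency table used as a prior over what an unseen piece might be. All the names, and the add-one smoothing, are our own illustrative choices:

```python
from collections import Counter, defaultdict

# Hypothetical sketch of Probe-style opponent memory: for each opponent,
# count how often each piece rank has started on each square across past
# games, then turn those counts into probabilities for unknown pieces.

setup_memory = defaultdict(Counter)   # opponent -> Counter of (square, rank)

def record_game(opponent, setup):
    """setup: dict mapping starting square -> piece rank revealed post-game."""
    for square, rank in setup.items():
        setup_memory[opponent][(square, rank)] += 1

def piece_prior(opponent, square, ranks):
    """Estimate P(rank | square) from past games, with add-one smoothing."""
    counts = setup_memory[opponent]
    total = sum(counts[(square, r)] for r in ranks) + len(ranks)
    return {r: (counts[(square, r)] + 1) / total for r in ranks}

record_game("alice", {"a1": "bomb", "b1": "marshal"})
record_game("alice", {"a1": "bomb", "b1": "miner"})
print(piece_prior("alice", "a1", ["bomb", "marshal", "miner"]))
# {'bomb': 0.6, 'marshal': 0.2, 'miner': 0.2} -- after two games, "bomb"
# is already the best guess for alice's a1 square.
```

Scale a table like this up over several thousand recorded games and you get exactly the “devastatingly strong information” Satz describes.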

Even so, Probe, which dominates other Stratego AIs in computer-versus-computer tournaments, can only match the skill of average human players. The AI challenge of dealing with incomplete information is too great.

Liars’ Poker

Unlike our species, computers find it difficult to dissemble. Probe may understand your machinations, but “it has a much more difficult time bluffing itself, because a bluff is a lie,” Satz says. “It is really, really hard to program a computer to pretend.”

Deception is crucial in games built on incomplete information. Poker, where dominant players tend to be unpredictable, is the perfect example. “If you’re so risk-averse that you never take a chance, your opponent will discover that, and that’s it,” Satz says. “If you’re the kind of person who ignores all risk, then you’re dead as well.”

To coach a computer to beat a human cardsharp, Schaeffer and the poker AI team at the University of Alberta simply sidestepped this problem. The program computes a probability distribution for each move, and that distribution of possible actions includes what we would call bluffing. The machine doesn’t interpret a hefty raise on a 2 and 7 as a bluff, intended to scare off the other players. It sees the raise as one choice in its array of options, which it will pick once in a while—specifically, the optimal number of times needed to stay unpredictable without losing all its money on bad bets. The machine can “lie” without needing to know human psychology.
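
The “optimal number of times” falls out of elementary game theory. In a textbook river situation (a standard simplification, not the Alberta team’s actual model), the bettor mixes in just enough bluffs that calling and folding are worth the same to the opponent, so no calling strategy can exploit the bet:

```python
from fractions import Fraction

# Textbook mixed-strategy bluffing. The bettor puts `bet` into a pot of
# `pot` holding either the best hand or a hopeless bluff. The caller risks
# `bet` to win `pot + bet`, so indifference requires
#   bluff_share * (pot + bet) == value_share * bet,
# which gives bluff_share = bet / (pot + 2 * bet) of all bets made.

def bluff_fraction(pot, bet):
    return Fraction(bet, pot + 2 * bet)

pot, bet = 100, 100                  # a pot-sized bet
f = bluff_fraction(pot, bet)
print(f)                             # 1/3: one bluff for every two value bets

ev_call = f * (pot + bet) - (1 - f) * bet
print(ev_call)                       # 0: the caller gains nothing by calling
```

Raising on a 2 and a 7 one time in three, in this toy spot, is not psychology; it is simply the frequency that keeps the strategy unexploitable.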

Still, poker programs are not master gamblers. The University of Alberta programs dominate even great players in limit Texas Hold’em, where betting is constrained. But when you take the training wheels off and play no-limit, humans win. When the bets can be any value, the machines cannot cope computationally with all the possible betting scenarios.

Word Problems

The Jeopardy! problem, put simply, is this: language. Although programmers could stock Watson’s databases with more texts, encyclopedias, and charts than anyone could possibly read, computers struggle to understand the “answers” offered by the game and have even more trouble with wordplay, which is involved in many Jeopardy! clues. For the same reason, crossword puzzles were the hot challenge 10 to 15 years ago, Massey says. There are myriad ways to fill in the board with letters, but only one that fits the clues. And like Jeopardy! answer writers, crossword clue writers are given to puns and bad jokes.

Scrabble presents perhaps the most intriguing word game challenge for a computer. Its board seems to be perfect for computer domination; just give the AI a dictionary and it should crush its human competitors. But Scrabble is more than a vocabulary contest. A Scrabble endgame—the last few turns, where contests at the highest level are frequently decided—is won by tacticians, not copy editors. The best competitors play not only to score points but also to block letters that might benefit their opponents. They do this thanks to a pretty good idea of which letters their opponents still hold, based on their play during the game, and knowledge of what a skilled player should do: “If she had the X and the Q, she would’ve played QUIXOTIC on that triple-word score.” That’s a logical leap based on reasoning rather than enormous data banks of words, Massey says.
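
The bookkeeping behind that inference is worth spelling out. In the endgame the bag is empty, so every tile a player cannot see on the board or on his own rack must be sitting on the opponent’s rack. A sketch, using the standard English tile distribution (the position itself is invented):

```python
from collections import Counter

# The 100 tiles in a standard English Scrabble set (blanks shown as '?').
TILE_BAG = Counter({
    'A': 9, 'B': 2, 'C': 2, 'D': 4, 'E': 12, 'F': 2, 'G': 3, 'H': 2,
    'I': 9, 'J': 1, 'K': 1, 'L': 4, 'M': 2, 'N': 6, 'O': 8, 'P': 2,
    'Q': 1, 'R': 6, 'S': 4, 'T': 6, 'U': 4, 'V': 2, 'W': 2, 'X': 1,
    'Y': 2, 'Z': 1, '?': 2,
})

# Invented endgame: the bag is empty and only seven tiles are hidden.
opponent_rack = Counter('QUIXOTE')       # what my opponent actually holds
visible = TILE_BAG - opponent_rack       # everything on the board or my rack

# Subtracting what I can see from the full set recovers the hidden rack.
print(sorted((TILE_BAG - visible).elements()))
# ['E', 'I', 'O', 'Q', 'T', 'U', 'X']
```

The reasoning about what a skilled opponent would have done with those letters sits on top of this simple subtraction.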

Multiplayer Meltdown

Deception, intuition, and mastery of language give us advantages that computers can’t touch. But there are other reasons why we have not yet been beaten by our machines: Computers falter as the number of players grows. Most of the games programmers have worked on are two-player, zero-sum games like chess. Multiplayer games add too many variables. Schaeffer’s poker systems, for instance, don’t play well against more than one opponent.

Risk is another game at which computers don’t do well, Massey says. “In Jeopardy! you don’t really worry much about strategic stuff, you just try to get as many points as you can,” he says. “You can’t play Risk like that. You have to think not only about how you’re going to beat your opponents, but how your opponents are going to interact with each other.”

Minds Like Us?

It might seem trivial to obsess about whether a computer has better game than a brain. But there are real implications. A machine connected to all the world’s knowledge that could understand a request in plain English could be almost unimaginably powerful and useful. IBM has emphasized its AI’s potential to revolutionize medicine, research, and other fields far beyond fun and games.

Nobody knows the power of the human mind the way programmers and AI experts like Schaeffer, Massey, and Satz do—they’ve invested countless hours trying to match and surpass human abilities, and they’ve come up against tactics like deception that machines struggle with and humans pull off with ease. They have also come to appreciate the limitations of the human mind. “You should never make an assumption that a computer program should operate on the same principles as a human,” Satz says. “I don’t think it’s valid to make the starting assumption that the way humans have always played a game is necessarily the best way.”

Computers can execute many more calculations per second than we can, memorize more openings and endgames, and evaluate many more options. With such superhuman mathematical abilities, a machine plays in ways human players might reject as worthless or never consider at all. As a result, gaming computers aren’t just trying to beat us. They’re teaching us.

“Computers aren’t bound by human preconceptions,” Schaeffer says. “We’ve seen this in chess, and checkers, and bridge, and backgammon. Computers have revolutionized how humans play, because the computer doesn’t have all the human baggage—the biases that we’re taught. It comes to these games with fresh eyes.”

Additional reporting by Adam Hadhazy

[This article originally appeared in print as "Advantage: Human."]
