If you ever visit a Las Vegas casino, look up. Hundreds of cameras cling to the ceiling like jet-black barnacles, watching the tables below. The artificial eyes are there to protect the casino’s income from the quick-witted and light-fingered. Until the 1960s, casinos’ definition of such cheating was fairly clear-cut. They only had to worry about things like dealers paying out on losing hands or players slipping high-value chips into their stake after the roulette ball had landed. The games themselves were fine; they were unbeatable.
Except it turned out that wasn’t true.
Mathematics professor Edward Thorp found a loophole in blackjack big enough to fit a best-selling book through. Then a group of physics students tamed roulette, traditionally the epitome of chance. Beyond the casino floor, people even scooped lottery jackpots using a mix of math and manpower.
The debate over whether winning depends on luck or skill is now spreading to other games. It may even determine the fate of the once lucrative American poker industry. In 2011, U.S. authorities shut down a number of major poker websites, bringing an end to the “poker boom” that had gripped the country for the previous few years. The legislative muscle for the shake-up came from the Unlawful Internet Gambling Enforcement Act. Passed in 2006, it banned bank transfers related to betting when the “opportunity to win is predominantly subject to chance.” Although the act has helped curb the spread of poker, it doesn’t cover stock trading or horse racing. So how do we decide what makes something a game of chance?
Poker Faces

The answer turned out to be worth a lot to one man. As well as taking on the big poker companies, federal authorities had gone after people operating smaller games. That included Lawrence DiCristina, who ran a poker room on Staten Island in New York. The case went to trial in 2012, and DiCristina was convicted of operating an illegal gambling business.
DiCristina filed a motion to dismiss the conviction, and the following month, he was back in court arguing his case. During the hearing, DiCristina’s lawyer called economist Randal Heeb as an expert witness. Heeb’s aim was to convince the judge that poker was predominantly a game of skill and didn’t fall under the definition of illegal gambling. He presented data from millions of online poker games and showed that, bar a few bad days, top-ranked players won pretty consistently. In contrast, the worst players lost throughout the year. The fact that some people could make a living from poker was surely evidence that the game involved skill.
The prosecution also had an expert witness, an economist named David DeRosa. He didn’t share Heeb’s views about poker. DeRosa used a computer to simulate what might happen if 1,000 people each tossed a coin 10,000 times, assuming a certain outcome — such as tails — was equivalent to a win and the number of times a particular person won the toss was random. But the results were remarkably similar to those Heeb presented: A handful of people appeared to win consistently, and another group seemed to lose a large number of times. This wasn’t evidence that a coin toss involved skill, just that — much like the infinite number of monkeys typing — unlikely events can happen if we look at a large enough group.
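DeRosa’s demonstration is easy to reproduce. The sketch below re-creates it under assumptions of my own (only the 1,000-player, 10,000-toss setup comes from the testimony; variable names and the seed are illustrative): every player tosses a fair coin, yet the extremes look like consistent “winners” and “losers.”

```python
import random

# 1,000 people each toss a fair coin 10,000 times; "tails" counts as a win.
# With no skill involved, some players still rack up far more wins than others.
random.seed(1)

N_PLAYERS = 1_000
N_TOSSES = 10_000

wins = [sum(random.random() < 0.5 for _ in range(N_TOSSES))
        for _ in range(N_PLAYERS)]

expected = N_TOSSES // 2
best, worst = max(wins), min(wins)
print(f"expected wins per player: {expected}")
print(f"luckiest player: {best} wins, unluckiest: {worst} wins")
```

Across 1,000 players the luckiest typically lands more than 150 wins above the unluckiest, purely by chance.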
Another concern for DeRosa was the number of players who lost money. Based on Heeb’s data, about 95 percent of people playing online poker ended up out of pocket. “How could it be skillful playing if you’re losing money?” DeRosa asked.
Heeb admitted that, in a particular game, only 10 to 20 percent of players were skillful enough to win consistently. The reason so many more people lost than won, he said, was partly down to the house fee, with poker operators taking a cut of the pot in each round. But he didn’t think the apparent existence of a poker elite was due to chance. Although a small group may appear to win consistently if lots of people flip coins, good poker players generally continue to win after they’ve ranked highly. The same cannot be said for the people who are fortunate with coin tosses.
According to Heeb, part of the reason players can win is that in poker, players have control over events. When bettors put money on a sports match or a roulette wheel, their wagers don’t affect the result. But poker players can change the outcome of games with their betting. “In poker, the wager is not in the same sense a wager on the outcome,” Heeb said. “It is the strategic choice that you are making. You are trying to influence the outcome of the game.”
But DeRosa argued that it doesn’t make sense to look at a player’s performance over several hands. The cards dealt are different each time, so each hand is independent of the last. If one hand involves a lot of luck, there’s no reason to think that player will have a successful round after a costly one. DeRosa compared the situation to the Monte Carlo fallacy. “If red has come up 20 times in a row in roulette,” he said, “it does not mean that ‘black is due.’ ”
Heeb conceded that a single hand involves a lot of chance, but it didn’t mean the game was chiefly one of luck. He used the example of the baseball pitcher. Although pitching involves skill, a pitch is also susceptible to chance: A weak pitcher could throw a good ball, and a strong pitcher a bad one. To identify the best — and worst — pitchers, we need to look at many throws.
The key issue, Heeb argued, is how long it takes for the effects of skill to outweigh chance. If that takes a very large number of hands (that is, longer than most people will play), then poker should be viewed as a game of chance. Heeb’s analysis of the online poker games suggested this wasn’t the case: Skill overtook luck after a relatively small number of hands. After a few sessions of play, a skillful player could therefore expect to hold an advantage.
It fell to the judge, a New Yorker named Jack Weinstein, to weigh the arguments. Weinstein noted the law used to convict DiCristina — the Illegal Gambling Business Act — listed games like roulette and slot machines but didn’t explicitly mention poker. Though Weinstein said this didn’t automatically mean the game wasn’t gambling, the omission meant the role of chance in poker was up for debate. And Weinstein had found Heeb’s evidence convincing. On Aug. 21, 2012, he ruled that poker was predominantly governed by skill rather than chance and did not count as gambling under federal law. DiCristina’s conviction was overturned.
The victory was short-lived, however. Although Weinstein ruled DiCristina hadn’t broken federal law, the state of New York has a stricter definition of gambling. Its laws cover any game that “depends in a material way upon an element of chance.” As a result, the state law meant poker still fell under the definition of a gambling business.
The DiCristina case is part of a growing debate about how much luck comes into play in games like poker. Definitions like “material degree of chance” will undoubtedly raise more questions. Given the close links between gambling and certain parts of finance, surely this definition would cover some financial investments, too? Where do we draw the line between flair and fluke?
Luck Or Skill?

It’s tempting to sort games into separate boxes marked “luck” and “skill.” Roulette, often an example of pure luck, might go into one. Chess, a game many believe relies only on skill, might go in the other. But it isn’t this simple. To start with, processes we think are as good as random are usually far from it.
Despite its popular image as the epitome of randomness, roulette was first beaten with statistics, and then with physics. Other games have fallen to the scientific method, too. Card counters have made blackjack profitable, and syndicates have turned lotteries into investments.
Moreover, games that we assume rely solely on skill do not. Take chess. There’s no inherent randomness in a game of chess: If two players make identical moves every time, the result will always be the same. But luck still plays a role. Because the optimal strategy isn’t known, there’s a chance a series of random moves could defeat even the best player.
Unfortunately, when it comes to making decisions, we sometimes take a rather one-sided view of chance. If our choices do well, we put it down to skill; if they fail, it’s bad luck. External sources can also skew our notion of skill. Newspapers print stories about entrepreneurs who hit a trend and make millions, or celebrities who suddenly become household names. We hear tales of new writers who pen instant best-sellers and bands that become famous overnight. We see success and wonder why those people are so special. But what if they’re not?
In 2006, Matthew Salganik and his colleagues at Columbia University published a study of an artificial “music market,” in which participants could listen to, rate and download dozens of different tracks. In total, there were 14,000 participants, whom the researchers secretly split into nine groups. In eight of the groups, participants could see which tracks were popular with their fellow group members. The final group was the control group, in which the participants had no idea what others were downloading.
The researchers found that the most popular songs in the control group — a ranking that depended on the merits of the songs themselves, not on what other people were downloading — weren’t necessarily popular in the eight social groups. In fact, the song rankings in these eight groups varied wildly. Although the “best” songs usually racked up some downloads, mass popularity wasn’t guaranteed.
Instead, fame developed in two stages. First, randomness influenced which tracks people picked early on. The popularity of these first downloaded tracks was then amplified by social behavior, with people looking at the rankings and wanting to imitate their peers.
Mark Roulston and David Hand, statisticians at the hedge fund Winton Capital Management, point out that the randomness of popularity may also influence investment funds’ rankings. “Consider a set of funds with no skill,” they wrote in 2013. “Some will produce decent returns simply by chance and these will attract investors, while the poorly performing funds will close and their results may disappear from view. Looking at the results of those surviving funds, you would think that on average they do have some skill.”
The line between luck and skill — and between gambling and investing — is rarely as clear as we think. Lotteries should be textbook examples of gambling, but after several weeks of rollovers, they can produce a positive expected payoff: Buy up all the combinations of numbers, and we’ll make a profit.
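The rollover arithmetic can be sketched with invented figures (the 6-from-49 format, the $1 ticket price and the $20 million jackpot are my assumptions, not numbers from the text, and smaller prizes and shared jackpots are ignored):

```python
from math import comb

# A 6-from-49 lottery has a fixed number of possible tickets.
combinations = comb(49, 6)          # 13,983,816 possible number combinations
cost_to_buy_all = combinations * 1  # at $1 per ticket

# After several rollovers, suppose the jackpot has swollen to $20 million.
jackpot = 20_000_000

# Buying every combination guarantees the top prize, so the expected
# payoff has turned positive.
profit = jackpot - cost_to_buy_all
print(profit)  # 6016184
```

Once the jackpot exceeds the cost of covering every combination, the “gamble” has a guaranteed positive return on paper; the practical obstacle is the manpower needed to buy nearly 14 million tickets before the draw.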
Sometimes the crossover happens the other way, with investments being more like wagers. Take Premium Bonds, a popular form of investment in the United Kingdom. Rather than receiving a fixed interest rate as with regular bonds, investors in Premium Bonds are entered into a monthly prize draw. The top prize is 1 million pounds, tax-free, and there are smaller prizes, too. By investing in Premium Bonds, people are, in effect, gambling the interest they would have otherwise earned. If they instead put their savings in a regular bond, withdrew the interest and used that money to buy rollover lottery tickets, the expected payoff wouldn’t be that different.
Luck as a Statistic

If we want to separate luck and skill in a given situation, we must first find a way to measure them. But sometimes an outcome is sensitive to small changes, with seemingly innocuous decisions completely altering the result. Individual events can have dramatic effects, particularly in sports like soccer and ice hockey, where goals are rare. It might be an ambitious pass that sets up a winning shot or a puck that hits the post. How can we distinguish between a hockey victory that’s mostly down to talent and one that benefitted from lucky breaks?
In 2008, hockey analyst Brian King suggested a way to measure how fortunate a particular NHL player is. “Let’s pretend there was a stat called ‘blind luck,’ ” he said. To calculate his statistic, he took the proportion of total shots that a team scored while that player was on the ice and the proportion of opponents’ shots that were saved, and then added these two values together. King argued that although creating shooting opportunities involved a lot of skill, there was more luck influencing whether a shot went in or not. Worryingly, when King tested the statistic on his local NHL team, it showed the luckiest players were getting contract extensions while the unlucky ones were being dropped.
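King’s statistic is simple to compute. The sketch below uses made-up sample figures (the function name and numbers are illustrative, not from King):

```python
# King's "blind luck" statistic: while a given player is on the ice,
# add the team's shooting percentage to its save percentage.
def blind_luck(goals_for, shots_for, goals_against, shots_against):
    shooting_pct = goals_for / shots_for           # fraction of own shots scored
    save_pct = 1 - goals_against / shots_against   # fraction of opponents' shots saved
    return shooting_pct + save_pct

# Example: the team scored 9 of 100 shots and saved 92 of the 100 it faced.
print(blind_luck(9, 100, 8, 100))  # about 1.01
```

Because roughly as much luck goes into shots going in as into shots being stopped, the two percentages tend to sum to about 1 for everyone in the long run; values well above or below that hint at good or bad fortune rather than ability.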
The statistic, later dubbed “PDO” after King’s online moniker, has since been used to assess the fortune of players — and teams — in other sports, too. In the 2014 World Cup, several top teams failed to make it beyond the preliminary group stage. Spain, Italy, Portugal and England all fell at the first hurdle. Was it because they were lackluster or unlucky? The England team is famously used to misfortune, from disallowed goals to missed penalties. It seems that 2014 was no different. England had the lowest PDO of any team in the tournament, with a score of 0.66.
We might think teams with a very low PDO are just hapless. Maybe they have a particularly error-prone striker or a weak keeper. But teams rarely maintain an unusually low (or high) PDO in the long run. If we analyze more games, a team’s PDO will quickly settle down to numbers near the average value of 1. It’s what Francis Galton called “regression to mediocrity” — if a team’s PDO is noticeably above or below 1 after a handful of games, it’s likely due to luck.
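This pull toward the average can be simulated. All the parameters below are assumptions of mine (a perfectly average team with a 9 percent chance of scoring on each shot, a 91 percent save rate, and 30 shots each way per game): over a handful of games the team’s PDO is noisy, but over hundreds it settles near 1.

```python
import random

random.seed(7)

def pdo_over(games, shots_per_game=30, shoot_p=0.09, save_p=0.91):
    """Simulated PDO for an average team over a given number of games."""
    total_shots = games * shots_per_game
    goals = sum(random.random() < shoot_p for _ in range(total_shots))
    saves = sum(random.random() < save_p for _ in range(total_shots))
    return goals / total_shots + saves / total_shots

print(f"after   5 games: {pdo_over(5):.2f}")    # noisy
print(f"after 500 games: {pdo_over(500):.2f}")  # close to 1
```

The team’s true PDO is exactly 1 by construction, so any early deviation is pure sampling noise, which is why extreme PDO values after a few games are a sign of luck rather than a lasting trait.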
Skill as a Statistic

Statistics like PDO can be useful to assess how lucky teams are, but they aren’t necessarily helpful when placing bets. Gamblers are more interested in making predictions. In other words, they want to find factors that reflect ability rather than luck. But how important is it to actually understand skill?
Take horse racing. Predicting events at a racetrack is a messy process. All sorts of factors could influence a horse’s performance in a race, from past experience to track conditions. To pin down which factors are useful, syndicates need to collect reliable, repeated observations about races. For the American gambler Bill Benter, Hong Kong was the closest thing he could find to a laboratory setup, with the same horses racing regularly on the same tracks in similar conditions.
Using his statistical model, Benter identified factors that could lead to successful race predictions. He found that some came out as more important than others. In his early analysis, for example, the model said that the number of races a horse had previously run was a crucial factor when making predictions. In fact, it was more important than almost any other factor. Maybe the finding isn’t all that surprising. We might expect horses that have run more races to be used to the terrain and less intimidated by their opponents.
It’s easy to think up explanations for observed results. Given a statement that seems intuitive, we can convince ourselves why that should be the case, and why we shouldn’t be surprised at the result. This can be a problem when making predictions. By creating an explanation, we’re assuming that one process has directly caused another. Horses in Hong Kong win because they are familiar with the terrain, and they are familiar with it because they have run lots of races. But just because two things are apparently related — like probability of winning and number of races run — it doesn’t mean that one directly causes the other.
An often-quoted mantra in the world of statistics is that “correlation does not imply causation.” Take the wine budget of Cambridge colleges. It turns out that the amount of money each Cambridge college spent on wine in the 2012-2013 academic year was positively correlated with students’ exam results during the same period. The more the colleges spent on wine, the better the results generally were.
Similar curiosities appear in other places, too. Countries that consume lots of chocolate win more Nobel prizes. When ice cream sales rise in New York, so does the city’s murder rate. Of course, buying ice cream doesn’t make us homicidal, just as eating chocolate is unlikely to turn us into Nobel-quality researchers, and drinking wine won’t make us better at exams.
In each of these cases, a separate underlying factor could explain the pattern. For Cambridge colleges, it could be wealth, which would influence both wine spending and exam results. Or a more complicated set of reasons could lurk behind the observations. This is why Benter doesn’t try to interpret why some factors appear to be so important in his horse racing model. The number of races a horse has run might be related to another hidden factor that directly influences performance.
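The wine-and-exams pattern is easy to mimic with a toy simulation (every number below is invented): give each college a hidden “wealth” level, let it drive both wine spending and exam scores, and the two correlate strongly even though neither causes the other.

```python
import random

random.seed(3)

# Hidden confounder: each college's wealth (arbitrary units).
wealth = [random.gauss(50, 10) for _ in range(200)]

# Both observed quantities depend on wealth plus independent noise;
# neither depends on the other.
wine_spend = [100 * w + random.gauss(0, 200) for w in wealth]
exam_score = [0.5 * w + random.gauss(0, 2) for w in wealth]

def corr(xs, ys):
    """Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    vx = sum((x - mx) ** 2 for x in xs) / n
    vy = sum((y - my) ** 2 for y in ys) / n
    return cov / (vx * vy) ** 0.5

print(f"wine/exam correlation: {corr(wine_spend, exam_score):.2f}")
```

The simulation prints a correlation well above zero, yet by construction there is no causal link between wine and exams, only the shared dependence on wealth.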
Alternatively, there could be an intricate trade-off between races run and other variables — like weight and jockey experience — which Benter could never hope to distill into a neat “A causes B” conclusion. But Benter is happy to sacrifice elegance and explanation if it means having good predictions. It doesn’t matter if his factors are counterintuitive or hard to justify. The model is there to estimate the probability a certain horse will win, not to explain why it will win.
From hockey to horse racing, sports analysis methods have come a long way in recent years. They have enabled gamblers to study matches in more detail than ever, combining bigger models with better data. As a result, scientific betting has moved far beyond card counting.
Excerpt from The Perfect Bet: How Science and Math Are Taking the Luck Out of Gambling by Adam Kucharski. Available from Basic Books, a member of the Perseus Book Group. Copyright © 2016.