The son of a dressmaker and a professional gambler, Geoffrey West was born just after the outbreak of World War II and raised in relative poverty in postwar England. From those humble beginnings, he went on to a brilliant career in theoretical physics, eventually helping found the Elementary Particles and Field Theory group at Los Alamos National Laboratory in New Mexico in 1974. There he investigated some of the deepest mysteries in physics, including the underlying structure of protons and neutrons, the particles that make up the nuclei of atoms.

Then in the 1990s, at an age when many researchers begin downshifting their careers, West embarked on a brand-new quest, seeking the universal laws that govern biology. After joining the Santa Fe Institute, which focuses on the interdisciplinary study of complex systems, he went even further afield, using mathematical models to investigate the fundamental organization of cities, with surprising results. He and a group of collaborators discovered that simply knowing the population of any given urban area allowed them to accurately predict the nuanced details of its infrastructure and its socioeconomic state. Give West the raw census data from your city—regardless of its history or geography—and he can tell you everything from the number of gas stations in it to the number of patents produced by its inhabitants.

In his offices at the Santa Fe Institute in Santa Fe, New Mexico, where he served as president for four years, West recently sat down for an extended conversation with DISCOVER staff writer Veronique Greenwood. They discussed his history of tackling questions outside his field, the fundamental laws that govern cities, and his belief that population overload is draining global resources, steering us toward socioeconomic collapse.

You are an unusual academic omnivore: You have mixed physics and math with biology and population studies. How do you link those fields together in your head? When I was 10 or 11, I used to go walking on the big chalk cliffs south of London and look across the English Channel. On the horizon I could see the ships getting smaller and smaller. Then, when I was learning trigonometry a few years later, I encountered a remarkable problem: If you know the radius of the Earth, and you’re standing on the edge of the ocean, how far away is the horizon? I learned there was a formula that I could use to calculate how far away I could see. I thought: “My God. This is powerful stuff.” The rest of my life I have been trying to do that in some sense. So when I look out the window now and see the city and the landscape around us, I ask, “Can we put any of this into mathematics, and can we predict anything about it?”

You moved to America and went to California for graduate school. Was that a shocking change? When I first saw California, it was extraordinary. Because I came from old, black, dark England, still recovering from World War II. I grew up with bomb sites everywhere. And there still were bomb sites in 1961 when I left and went to Stanford. There I got involved in theoretical work on subatomic particles.

Early in your career, you cofounded the high-energy physics group at Los Alamos. What was that like?

We had some amazing people there. Many of us played an important role in helping to develop what became known as the Standard Model of physics, which is our best mathematical description of the fundamental forces and particles. And because of the huge computing power at Los Alamos, we worked on simulating the theory on a computer rather than using the traditional analytic mathematical techniques of the time.

You were involved with the Superconducting Super Collider, or SSC, a Texas-based particle smasher that would have dwarfed the Large Hadron Collider. That experience helped drive you out of particle physics. What went wrong?

The U.S. government pulled the plug on the SSC in 1993, after $3 billion had already gone in. I was feeling disillusioned, and I was getting old. I come from a working-class family where the men die young. I had always assumed that I would probably not live much beyond about 60, and I was getting into my midfifties. Somewhere, I thought, there should be a formula, based on the underlying principles of how life works, that would let you calculate the life span of a human being.

How did you start on your quest to understand life span? I started to look into metabolism, the chemical processes that sustain life. And what I found were these amazing scaling laws that had been discovered in biology 60 years earlier for which there were no accepted explanations. A scaling law basically represents how various measurements in a system—say, the bodies of mammals—change proportionally as size changes. The first and most famous scaling law is something called Kleiber’s law, which describes how metabolic rate, the amount of energy you need per day to stay alive, is related to an organism’s size. It turns out that metabolic rate [r] is just the mass [M] of the organism raised to the three-quarters power [r ≈ M^¾]. A whale, for instance, weighs about 100 million times more than a shrew. You might expect its metabolic rate to be 100 million times greater, too. But it’s only a million times bigger, because metabolic rate scales as mass to the three-quarters [100,000,000^¾ = 1,000,000]. The pattern holds with very few exceptions across all organisms. That struck me as extraordinary.
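The whale-and-shrew arithmetic can be checked directly; a minimal sketch in Python, using the mass ratio of 10^8 quoted above:

```python
# Kleiber's law: metabolic rate scales as mass to the 3/4 power.
mass_ratio = 100_000_000               # a whale weighs ~10^8 times more than a shrew
metabolic_ratio = mass_ratio ** 0.75   # not 10^8, but (10^8)^(3/4) = 10^6

print(f"{metabolic_ratio:,.0f}")       # 1,000,000 -- a million, not a hundred million
```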

And Kleiber’s law is just the beginning. What else did you find about how biological systems scale? Looking further, I learned that there were all these scaling laws that people had extracted from biological data. There was one law showing that heart rate decreases as mass raised to the one-quarter power, meaning larger animals have predictably slower heartbeats than small animals. Let’s take a whale and a shrew again. The whale is a hundred million times bigger than a shrew, but its heart rate is just a hundred times slower. There was another law showing that life span increases as mass raised to approximately one-quarter, which translates into larger animals having a longer life span than smaller animals. These two laws together say that the number of heartbeats in a lifetime is essentially the same whether you are a shrew or a whale. It gives rise to the idea that big animals live very long but very slowly, and little ones live very fast but over a very short period of time.
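That invariance follows from the exponents alone, since the two quarter-power laws cancel. A sketch (the prefactors `k_rate` and `k_span` are hypothetical placeholders, not measured constants):

```python
import math

def heartbeats_per_lifetime(mass, k_rate=1.0, k_span=1.0):
    """Heart rate ~ M^(-1/4) and life span ~ M^(1/4): the product cancels mass."""
    heart_rate = k_rate * mass ** -0.25   # slower heartbeat for bigger animals
    life_span = k_span * mass ** 0.25     # longer life for bigger animals
    return heart_rate * life_span

shrew, whale = 1.0, 1e8  # whale ~10^8 times heavier, per the interview
print(math.isclose(heartbeats_per_lifetime(shrew),
                   heartbeats_per_lifetime(whale)))  # True
```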

In 1996 you extended those ideas about the patterns of life at the Santa Fe Institute, where you collaborated with biologists for 15 years. How did that come about and what did you find?

One day I got a call from Mike Simmons, the vice president at the Santa Fe Institute. He brought me together with Jim Brown, a well-known biologist at the University of New Mexico who, in a fantastic coincidence, was looking for a physicist to work with on biological scaling. Together with Brown’s student Brian Enquist, we met informally here at the institute at 9 a.m. every Friday and talked about finding an underlying theory for the scaling laws. Ultimately we built a mathematical model of the mammalian circulatory system from scratch, working from basic physical laws that described networks, flow, and so on. When we put all those rules together, we determined that the blood flow rate through any mammal’s aorta scales as mass to three-quarters. That allows us to predict the blood flow rate of a mammal just by knowing its size. And the blood flow rate through the aorta defines the metabolic rate, because it’s what carries the oxygen. In other words, our mathematical model gave us Kleiber’s law.

So what did your mathematical model tell you about how real mammals are constructed? That the scaling laws they follow are the natural, emergent outgrowth of networks—in this case, a circulatory network—that are constructed according to basic sets of rules.

Do these laws work in other life forms besides animals? Yes. For instance, we extended scaling laws to plants and trees. We found that the number of branches scales with the radius of the tree trunk, which tells us that even the generic geometry of trees obeys scaling laws. When you walk through a forest, you just see this mess. Trees look like random conglomerations of branches. But in fact there’s unbelievable structure there. And these equations describe it.

In 2003 you started studying cities. What led you there? Cities are obvious metaphors for life. We call roads “arteries” and so forth. But more importantly, they are our unique creations. Santa Fe feels unique, New York City feels unique. They have their own culture, history, and geography. They have their own planners, politicians, and architects. Yet when my collaborators and I looked at tremendous amounts of data about cities, we found universal scaling laws again. Each city is not so unique after all. If you look at any infrastructural quantity—the number of gas stations, the surface area of the roads, the length of electric cables—it always scales as the population of the city raised to approximately the 0.85 power.

So even without planning it, every city’s infrastructure follows the same mathematical pattern? How can that be? The bigger the city is, the less infrastructure you need per capita. That law seems to be the same in all of the data we can get at. It is a really interesting relationship, and it’s very reminiscent of scaling laws in biology. However, when we looked at socioeconomic quantities—quantities that have no analogue in biology, like wages, patents produced, crime, number of police, et cetera—we found that unlike everything we’d seen in biology, cities scale in a superlinear fashion: The exponent was bigger than 1, about 1.15. That means that when you double the size of the city, you get more than double the amount of both good and bad socioeconomic quantities—patents, AIDS cases, wages, crime, and so on.
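The two exponents can be plugged in directly; a minimal sketch of what doubling a city's population implies under these power laws:

```python
infra_exp, socio_exp = 0.85, 1.15  # exponents reported for infrastructure vs. socioeconomic quantities

infra_factor = 2 ** infra_exp      # infrastructure growth when population doubles
socio_factor = 2 ** socio_exp      # socioeconomic growth when population doubles

print(round(infra_factor, 2))      # 1.8  -> sublinear: less infrastructure per capita
print(round(socio_factor, 2))      # 2.22 -> superlinear: more than double, good and bad alike
```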

“Despite all the efforts of planners, architects, and politicians, cities somehow obey scaling laws.”

And those laws apply to all cities, regardless of location? This scaling seems to be true across the globe, no matter where you are. I think that what’s responsible for it is the hierarchical nature of human relationships. First of all, you cluster in a family. On average, an individual doesn’t have a powerful connection with more than four to six people, and that’s just as true here in the U.S. as it is in China. Then there are clusters of families, and then larger clusters that form neighborhoods, and so on, all the way up. The structure of this network of relationships could be analogous to the behavior of the networks of blood vessels in the body. They could be the universal thing holding the city together.

Does your discovery have practical implications for urban planning? You tell me the size of any city in the United States and I can tell you with 80 to 90 percent accuracy almost everything about it. The scaling laws tell you that despite all of the efforts of planners, geographers, economists, architects, and politicians, and all of the local history, geography, and culture, somehow cities end up having to obey these scaling laws. We need to be aware of those forces when we design and redesign cities.

Can your insights about the scaling laws of cities help us understand the impact of population growth and urban migration? I believe that part of what has made life on Earth so unbelievably resilient—able to evolve and survive across billions of years—is the fact that its growth is generally sublinear, with the exponents smaller than 1. Because of that, organisms evolve over generations rather than within their own lifetimes, and such gradual change is incredibly stable. But human population and our use of resources are both growing superlinearly, and that is potentially unstable.

Meaning that our consumption of resources can’t keep growing forever? Right. Our theory suggests we will face something mathematicians call a “finite time singularity.” Equations with superlinear behavior, rather than leveling out like the sublinear ones in biology, go to infinity in a finite time. But that’s impossible, because you’re going to run out of finite resources. The equations tell us that when you reach this point, the system stagnates and collapses.
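The singularity West describes has a simple closed form. A hedged sketch, assuming growth of the form dN/dt = k·N^β (the constants below are illustrative, not fitted to any real city):

```python
def blowup_time(n0, k, beta):
    """Solving dN/dt = k * N**beta for beta > 1 gives
    N(t) = n0 * (1 - (beta - 1) * k * n0**(beta - 1) * t) ** (-1 / (beta - 1)),
    which diverges at the finite time returned here."""
    assert beta > 1, "sublinear growth (beta <= 1) never blows up in finite time"
    return 1.0 / ((beta - 1) * k * n0 ** (beta - 1))

# With the superlinear urban exponent, the singularity arrives at a finite time:
print(round(blowup_time(n0=1.0, k=1.0, beta=1.15), 2))  # 6.67 (in these arbitrary units)
```

The point of the sketch is qualitative: any β above 1 yields a finite blow-up time, while the sublinear exponents seen in biology never do.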

If your interpretation of population growth is true, why haven’t cities already collapsed? The growth equation was derived with certain conditions that are determined by the cultural innovation that dominates each historic period: iron, computers, whatever it is. An innovation that changes everything—like a new fuel—resets the clock, so you can avoid the singularity a bit longer. But the theory says that to avoid the singularity, these innovations have to keep coming faster and faster.

What are the issues most likely to push us toward collapse? I think the biggest stresses are clearly going to be on energy, food, and clean water. A lot of people are going to be denied these basics across the globe. If there is a collapse—and I hope I’m wrong—it will almost certainly come from social unrest starting in the most deprived areas, which will spread to the developed world.

How can we prevent that kind of collapse from happening? We need to seriously rethink our socioeconomic framework. It will be a huge social and political challenge, but we have to move to an economy based on no growth or limited growth. And we need to bring together economists, scientists, and politicians to devise a strategy for doing what has to be done. I think there is a way out of this, but I’m afraid we might not have time to find it.

That sounds similar to the dire warnings of economist Thomas Malthus in the 18th century and biologist Paul Ehrlich in the 1960s. Those predictions proved spectacularly wrong. How is yours different?

I’ve been called a neo-Malthusian as if it’s a horrible word, but I’m proud to be one. Ehrlich and Malthus were wrong because they didn’t take into account innovation and technological change. But the spirit was correct, and it is unfortunate that people dismiss their arguments outright. Even though innovations reset the clock, from the work that I’ve done, I think all they do is delay collapse.