This story was originally published in our May/June 2023 issue as "Making Waves."
On Feb. 11, 2016, scientists at the Laser Interferometer Gravitational-wave Observatory (LIGO) unveiled the first direct detection of gravitational waves — produced, in this case, by the merger of two black holes, 1.3 billion light-years away. The announcement (and accompanying scientific paper) came 100 years after Albert Einstein’s 1916 prediction that such waves would be unleashed during violent events in the universe. Based on his new theory of general relativity, Einstein concluded that gravitational waves would form when massive objects accelerated through spacetime, creating outward-spreading ripples as they moved, just like the wake that follows a powerboat speeding across once-tranquil water.
Three physicists — Barry Barish, Kip Thorne, and Rainer Weiss — received the 2017 Nobel Prize in Physics for their “decisive contributions to the LIGO detector and the observation of gravitational waves.” Of course, the efforts of many other researchers were crucial to the milestone achieved in 2016, including those of Princeton University physicist Frans Pretorius, an unsung hero in this saga.
The importance of Pretorius’ contributions stems from the way in which gravitational-wave astronomy differs from almost every other branch of observational science. Experimentalists could not go straight from Einstein’s original predictions about gravitational radiation to building an apparatus capable of detecting a passing wave.
In between theory and observation lies the essential realm of “numerical relativity”: using supercomputers to solve the equations of general relativity in order to describe the gravitational waves that would be created when two black holes of given masses, orbital velocities, and rotational rates collide and eventually become one.
The cataclysmic union is far too complex to solve with paper and pencil, which is why computers must be enlisted. The field equations of general relativity consist of 10 nonlinear differential equations that must be solved simultaneously — a task so demanding that exact solutions have only been obtained in a handful of special cases. “Numerical solutions are approximate,” Pretorius explains, “but they can be made as close to exact as one has computational resources to throw at the problem.”
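To give a flavor of what “throwing computational resources at the problem” means in practice, here is a toy sketch — emphatically not a relativity code, which must evolve 10 coupled nonlinear equations — of a simple one-dimensional wave equation solved by finite differences. The key point is the one Pretorius makes: the answer is approximate, but refining the grid brings it as close to exact as you can afford. Halving the grid spacing with this second-order scheme cuts the error roughly fourfold.

```python
import math

def solve_wave(n, t_final=0.4):
    """Evolve u_tt = u_xx on [0, 1] with u = 0 at both ends, using a
    standard second-order leapfrog finite-difference scheme, and return
    the largest pointwise error against the exact standing-wave solution."""
    dx = 1.0 / n
    dt = 0.5 * dx                      # time step well inside the stability limit
    steps = int(round(t_final / dt))
    dt = t_final / steps               # land exactly on t_final
    lam2 = (dt / dx) ** 2

    x = [i * dx for i in range(n + 1)]
    u_prev = [math.sin(math.pi * xi) for xi in x]      # u at t = 0
    # First step from a Taylor expansion (the initial velocity is zero here).
    u_curr = [u_prev[i] + 0.5 * lam2 * (u_prev[i - 1] - 2 * u_prev[i] + u_prev[i + 1])
              if 0 < i < n else 0.0 for i in range(n + 1)]

    for _ in range(steps - 1):
        u_next = [0.0] * (n + 1)
        for i in range(1, n):
            u_next[i] = (2 * u_curr[i] - u_prev[i]
                         + lam2 * (u_curr[i - 1] - 2 * u_curr[i] + u_curr[i + 1]))
        u_prev, u_curr = u_curr, u_next

    # Exact solution of this problem: sin(pi x) cos(pi t).
    exact = [math.sin(math.pi * xi) * math.cos(math.pi * t_final) for xi in x]
    return max(abs(a - b) for a, b in zip(u_curr, exact))

err_coarse = solve_wave(50)    # coarse grid
err_fine = solve_wave(100)     # twice as fine: error shrinks by about 4x
print(err_coarse, err_fine)
```

Real numerical-relativity codes follow the same logic — replace derivatives with differences on a grid and march forward in time — but on spacetime grids in three dimensions, with enormously more complicated equations.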
An early and central goal of numerical relativity — which gained added urgency in 1990 when the National Science Foundation approved LIGO’s construction — was to model a black hole collision and figure out the resulting gravitational “waveform,” or shape of the waves produced throughout the entire interaction, as well as the amplitude and frequency of those waves. Over time, LIGO investigators have built up a library of solutions, or “templates,” which they try to match against detected signals. This critical resource helps researchers know what to look for and to interpret what they are seeing.
But the physicists and computer scientists pursuing this challenge were held back by a host of obstacles — that is, until Pretorius made a huge breakthrough in 2005. He carried out the first successful simulation of a black hole merger, while simultaneously computing the angular momentum of the combined black hole and determining that 5 percent of the system’s initial mass would be radiated away as gravitational waves.
Pretorius accomplished his “goal of connecting the theoretical [field equations of general relativity] with the experimental [gravitational wave astronomy],” comments Lydia Patton, a philosopher and science historian at Virginia Tech. That, in turn, “made it possible to simulate [gravitational] signals from distant systems … under the assumption that general relativity is correct.” In Patton’s opinion, Pretorius could and should have been considered for the Nobel Prize, if the award were ever given to more than three scientists.
Pretorius spoke with Discover about his specialty, numerical relativity, and its importance to gravitational-wave astronomy — a field that is opening up a whole new window on the universe.
Q: Gravitational-wave astronomy was barely a field when you began your studies in science, so how did you get into it?
FP: I entered Southern Oregon University in 1989, starting out as a physics-math major and then changing to computer science. In my third year, I transferred to the University of Victoria and switched to computer engineering. After I finished my engineering degree in 1996, I took a course in general relativity — and loved it. That was the best course I’d ever taken, and I decided to pursue physics in graduate school, after first spending a year on catch-up courses.
I got my master’s in physics in 1999 and then started working toward my Ph.D. at the University of British Columbia [UBC] under the supervision of Matthew Choptuik, a numerical relativist. I had been regretting how much time I’d wasted on computers, but in hindsight, that experience has helped me tremendously in numerical relativity. All those things that I studied fit together, even though it wasn’t planned. Luck can play a big role in scientific careers.
Q: What were the biggest challenges confronting numerical relativists when you started out in the late 1990s?
FP: The idea was to write these computer codes that solve general relativity’s field equations by doing billions of operations on CPUs, but the programs were unstable. When you tried to model two black holes moving toward each other, very soon after you started, some “illegal” things would happen — like dividing by zero or going outside the allowable range of numbers into infinity — and the code would crash. The predominant view was that the problem stemmed from picking the wrong coordinate system, where a coordinate system is simply the grid you use to map out spacetime.
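The kind of blow-up Pretorius describes can be illustrated with a generic toy example — this is a textbook finite-difference instability, not the specific pathology of the early relativity codes. Running a wave-equation update with too large a time step (violating the stability limit) lets tiny round-off errors feed on themselves, and the numbers race off toward infinity just as he describes.

```python
import math

def max_amplitude(courant, steps=200, n=50):
    """Advance a periodic 1-D leapfrog wave solver and report the largest
    value seen. courant = dt/dx; the scheme is stable only for courant <= 1."""
    dx = 1.0 / n
    lam2 = courant ** 2
    u_prev = [math.sin(2 * math.pi * i * dx) for i in range(n)]
    u_curr = u_prev[:]          # crude start: zero initial velocity
    peak = 0.0
    for _ in range(steps):
        u_next = [2 * u_curr[i] - u_prev[i]
                  + lam2 * (u_curr[i - 1] - 2 * u_curr[i] + u_curr[(i + 1) % n])
                  for i in range(n)]
        u_prev, u_curr = u_curr, u_next
        peak = max(peak, max(abs(v) for v in u_curr))
    return peak

print(max_amplitude(0.9))   # stable: the wave's amplitude stays of order 1
print(max_amplitude(1.1))   # unstable: round-off noise grows explosively
```

In the early binary-black-hole codes the trigger was different — a bad choice of coordinates rather than a bad time step — but the symptom was the same: values leaving the allowable range and crashing the run.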
In the early 1950s, the mathematician Yvonne Choquet-Bruhat proved that in at least one special coordinate system, called harmonic coordinates, the Einstein equations would be “mathematically sensible.” Harmonic coordinates are sometimes called “wave coordinates” because they adapt to passing waves. Imagine dividing a pond into a grid and putting a rubber duck in each grid section. If there’s no wave, the duck just sits there. When a wave moves past, the ducks will bob up and down. Even though the ducks are moving, they hold a fixed position with respect to the coordinates.
Saying that the equations are mathematically sensible when written in harmonic coordinates relates to their potential predictability. If we know the initial conditions of a system at some time, can the theory tell us what it will be at a later time? To do so, it has to have a sensible mathematical formulation. The equations are not well-behaved, or “well-posed,” in all coordinate systems, but Choquet-Bruhat found a coordinate system in which they are. The trouble is, people in numerical relativity ignored her work. And there were several conjectures floating around that said harmonic coordinates would be bad for modeling gravitational waves. That misunderstanding prevented progress for a long time.
Q: How did you make use of Choquet-Bruhat’s finding?
FP: I’d read a 2001 paper by the physicist David Garfinkle, who’d been using generalized harmonic coordinates — a modified version of the harmonic coordinates used by Choquet-Bruhat — to understand the kind of singularities expected to form in the middle of black holes. Singularities are places where the curvature of spacetime and the density of matter become infinite, causing the general relativity equations to go haywire.
Generalized harmonic coordinates are like harmonic coordinates except that you add something called “forcing functions” that give the coordinates a greater ability to adapt. Adding these functions doesn’t change your solution to the Einstein equations; it just changes how you map out the spacetime. I decided to apply generalized harmonic coordinates first to the formation of a single black hole and then to the case of two merging black holes. Choosing those coordinates turned out to be a key step toward resolving the instability problem.
To explain this a bit more, let’s go back to our rubber ducks: It’s fine if they move up and down when a wave goes by. But if a large wave breaks and pushes the ducks together, that creates a singularity and the Einstein equations break down. The forcing function is a “legal” way of keeping that from happening. Suppose the ducks are floating down a stream and the stream narrows; a forcing function can keep them from touching. By rearranging the ducks a little bit in this way, we aren’t changing the surface, just how we sample it. That’s analogous to a coordinate change, which does not change the physics in a material way.
But if there’s a waterfall ahead, the ducks will pile up on top of each other. That’s a true singularity, and the forcing functions can’t help you in that situation.
Q: How did you cope with the problem of singularities when two black holes merge and their central singularities combine as well?
FP: It’s true, there is a physical singularity in a black hole, and you have to deal with that in some way. I made use of an “excision” technique pioneered by UBC physicist William Unruh. These singularities, as I’ve said, are like true waterfalls. They’re not just coordinate problems, and the computer code can’t handle them. But you are allowed to cut them out if they’re hidden behind an event horizon — an invisible boundary surrounding the black hole. Nothing inside this boundary, including light, can escape. Consequently, with black holes, nothing can get outside to pollute the [gravitational wave] signal you are trying to compute.
If we remove these singularities from the conversation, just cut them out, it won’t mess up the calculations outside the black hole. And that’s what I did.
Q: You made another innovation related to the so-called constraint equations. How do they come into play?
FP: Constraint equations are a kind of hidden structure built into the Einstein equations. The idea is that if you start out with some initial conditions and let the equations evolve into the future, you will still get a solution to these equations. You can think of it this way: If you put a marble on top of a hill, it can roll down all kinds of ways, but the constraints might tell you that only one of those paths — such as right along the ridgeline — is an actual solution to the Einstein equations.
That had been a problem in numerical simulations. A slight numerical error can create a tiny jiggle, causing the marble to fall off the ridge. When that happens, the code crashes. My strategy was to change the landscape and convert the ridge into a valley. That way, the shape of the surrounding space helps keep things stable.
If you get bumped by a little bit on a ridge, you fall off. Game over. But if you get bumped a little in the valley, it doesn’t change things very much. It sounds like a trick, almost too good to be true, because you are kind of messing with parts of the equation. But at the end of the day, you’ve managed to stay along the ridgeline, which means you still have the right solution. The fact that you changed the landscape doesn’t matter so long as you haven’t changed the ridgeline itself.
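The ridge-to-valley idea can be sketched with a deliberately simplified toy system — an analogy, not the actual constraint-damping terms added to the Einstein equations. Motion around a circle exactly preserves the “constraint” C = x² + y² − 1, but a crude numerical integrator lets C drift off zero. Adding a damping term proportional to C pulls the state back toward the constraint surface; crucially, because the term vanishes whenever C = 0, it leaves the true solutions — the ridgeline — untouched.

```python
def evolve(damping, steps=5000, dt=0.01):
    """Integrate circular motion x' = -y, y' = x with forward Euler.
    The exact flow preserves C = x^2 + y^2 - 1, but Euler's truncation
    error makes C grow. The damping terms, proportional to C, push the
    state back toward the constraint surface without altering which
    trajectories actually solve the equations. Returns |C| at the end."""
    x, y = 1.0, 0.0
    for _ in range(steps):
        c = x * x + y * y - 1.0            # current constraint violation
        dx = -y - damping * c * x
        dy = x - damping * c * y
        x, y = x + dt * dx, y + dt * dy
    return abs(x * x + y * y - 1.0)

print(evolve(damping=0.0))   # the "ridge": errors accumulate and C drifts away
print(evolve(damping=1.0))   # the "valley": small bumps decay, C stays small
```

The scheme Gundlach suggested plays the same role at vastly greater complexity: terms proportional to the constraint violations are folded into the evolution equations so that numerical noise decays instead of growing.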
Q: How did all these pieces come together?
FP: The key ingredients I assembled included Choquet-Bruhat’s harmonic coordinates along with Garfinkle’s forcing functions, the excision technique of Unruh, and the idea of converting the ridge to a valley, which was suggested to me by the mathematical physicist Carsten Gundlach. At first, I looked at how well single black holes would behave under these circumstances. When that was completely stable, I went after the binary black hole problem. That took another six months — or about three years of work altogether until the time of my first stable merger simulation in 2005.
The paper that came out that year just looked at the final orbit of the two black holes before the merger. I decided to include more orbits in my simulations. In a 2006 paper with Alessandra Buonanno and Greg Cook, we pushed it to four orbits.
By then, others in the [numerical relativity] community had taken it up, extending these simulations to ever more orbits, while also increasing their accuracy by throwing more computing power at the problem. At that point it had become more of an engineering challenge: Now that we have the method, people said, let’s extract more and more information. But rather than continuing to pursue more detailed simulations, I started investigating some new, though related, scientific questions.
Q: What are some of the new scientific questions you’re currently looking at?
FP: I’m interested in black hole binaries with high eccentricity — objects that follow highly elliptical orbits. Normally, when two stellar-sized black holes merge, their orbits become almost exactly circular near the end. You can picture that as two centers of mass, orbiting around a common point, on opposite sides of the same circle. Extreme eccentricity could arise in a “triple system” when you have a third compact object orbiting a black hole binary. It might also be seen in dense globular clusters populated by thousands of black holes. Two of them could be so close together that they wouldn’t have a chance to circularize before they collide. The resulting gravitational wave signal would be very different from what LIGO is looking for at present. Carrying out such an analysis would be very computationally expensive and well beyond current capabilities in numerical relativity.
Q: Are you thinking about compact objects other than black holes and neutron stars, which have already been detected at LIGO?
FP: Yes, I do wonder if there might be exotic things out there like wormholes and gravastars [compact objects without an event horizon] and, if they existed, what their gravitational wave signal would look like when they collide. We’re probing the universe with a new tool, so let’s see if we can find something beyond what we’d expect from conventional physics.