Big G

Ever since Isaac Newton watched an apple fall to the ground, scientists have taken gravity for granted. Until, that is, they tried to measure its strength with high-tech precision. Their results were so far apart, and so far from the accepted value, as to be newsworthy.

By Hans Christian Von Baeyer, Gary Tanhauser, and Klaus Schonwiese
Mar 1, 1996

Last April, at a meeting of the American Physical Society in Washington, D.C., representatives of three independent laboratories announced new high-precision measurements of the strength of the force of gravity. To the astonishment of the audience, the three measurements disagreed with one another by considerable amounts, and worse, none of them matched the value that physicists have accepted as correct for more than a decade. No one could offer so much as a hint to explain the discrepancies.

To illustrate the magnitude of the predicament, imagine a felon hunted by the police. They know that he is hiding somewhere along a street of ten blocks, with ten houses on each block. On the basis of previous information, the police have concentrated their surveillance on a particular house in the middle of the second block, when suddenly three new and presumably trustworthy witnesses appear. One places the miscreant in the very first house of the first block, the second singles out a dwelling near the end of the first block, while the third witness points to a house way across town at the other end of the street, more than eight blocks from the stakeout.

What are the cops to do? Go with the majority and move their operation over to the first block? Take an average and wait somewhere in the third block? Try to pick the most reliable witness and concentrate on a single house? Stretch their net to cover the entire ten-block street? Or stay put, discounting the new reports because they contradict one another? Physicists trying to make sense of the new measurements are facing the same unsatisfactory choices.

The goal of the measurements is easy to understand. According to Isaac Newton, any two material objects in the universe attract each other with a force that is proportional to the product of their masses and that diminishes as the square of their distance from each other. To quantify this phenomenon, physicists define as G the magnitude of the attraction that two one-kilogram masses, exactly one meter apart, exert on each other. Strictly speaking, G is an odd quantity with no intuitive meaning, so physicists take the liberty of referring to it in more familiar terms as a force. In this case, the value of G is 15.0013 millionths of a millionth of a pound. (G is not to be confused with g, the acceleration of gravity near the surface of Earth, or with the g-force, the effect of an acceleration on a body.)

The conceptual simplicity of measuring the strength of gravity contrasts sharply with the practical difficulty of carrying it out. There are two fundamental reasons for the elusiveness of G. For one thing, gravity is pathetically feeble. If the two chunks of matter were ten times closer, or about four inches apart, the force, though it would rise to 100 G, would still amount to no more than about a billionth of a pound—the weight of an average E. coli bacterium.
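
The arithmetic behind these figures is a one-line application of Newton's law, F = G × m1 × m2 / r². A minimal sketch, using the mid-1990s accepted value of G; the only steps are the conversion to pounds and the inverse-square scaling:

```python
# Newton's law, F = G*m1*m2/r**2, with the mid-1990s accepted value of G.
G = 6.6726e-11          # gravitational constant, N*m**2/kg**2
N_PER_POUND = 4.44822   # newtons in one pound of force

def grav_force(m1, m2, r):
    """Attraction in newtons between point masses m1, m2 (kg) at distance r (m)."""
    return G * m1 * m2 / r**2

f_1m = grav_force(1.0, 1.0, 1.0)     # the defining setup: 1 kg, 1 kg, 1 m apart
print(f_1m / N_PER_POUND)            # ~1.5e-11 lb: 15 millionths of a millionth

f_10cm = grav_force(1.0, 1.0, 0.1)   # the same masses, ten times closer
print(f_10cm / f_1m)                 # 100.0: the inverse-square law at work
```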

The other, more subtle, problem is that gravity, unlike all the other forces of nature, cannot be shielded. Electricity and magnetism, for example, which keep molecules from disintegrating, can be neutralized. Positive charges cancel negative charges, south poles offset north poles. Shielding makes it possible to insulate electrical conductors so they can be handled safely, even if they carry 220 lethal volts, and for the same reason, radios, which feed on electromagnetic radiation, fade in highway tunnels. No such shielding is available for gravity, and hence experiments to measure G are painfully sensitive to every stray gravitational influence, from sparrows flying over the laboratory roof to earthquakes in the antipodes.

Newton, who formulated the universal law of gravity and used it to explain a wealth of phenomena, including the orbits of planets, the tides of the ocean, and the flattening of Earth at its poles, did not need to know the value of G. Nor, for that matter, do NASA engineers who plot the paths of space probes with breathtaking precision. Most applications of the theory of gravity depend only on relative values, such as the ratio of the acceleration of the moon to that of an apple, which can be determined with much greater precision than the absolute value of G.

The first accurate measurement of G was not made, in fact, until 1797, more than a century after the discovery of the law of gravity, and it arose from a classic experiment performed by the English nobleman Henry Cavendish. Cavendish was an eccentric. Although he was said to be “the richest of all learned men, and very likely also the most learned of all the rich,” he lived frugally, spending his wealth only on books and scientific equipment. Morbidly taciturn and pathologically reclusive, he was such a confirmed misogynist that he communicated with his female housekeeper only by written notes.

Winfried Michaelis’s group in Brunswick, Germany, creates an electric field by means of two electrostatic generators to hold one end of a crossbar in place while the other end (not shown) is subjected to a minute gravitational tug. The crossbar floats on a pool of mercury.

Yet for all his bizarre behavior, Cavendish was one of the most original and productive scientists of his generation. The ingenious device he employed for measuring G, called a torsion balance, had been invented and built by the clergyman and amateur naturalist John Michell, and devised independently by the French electrical pioneer Charles Coulomb, but in Cavendish’s skillful hands it revolutionized the science of precision measurements. Almost all of the hundreds of subsequent determinations of G have used the torsion balance. Furthermore, it has been adapted for countless other applications, such as seismological measurements and electrical calibration—wherever precise control over very small forces is called for.

The conceptual basis of the torsion balance is the observation that it doesn’t take much force to induce a twist, or torsion, in a long, thin wire hanging from the ceiling. (A hanged man twists even in a faint breeze.) If a horizontal crossbar is hung from the lower end of the wire, in the manner of a rod in a mobile, it can serve as a pointer for indicating the angle through which the wire has been twisted. Once such a torsion balance has been calibrated, it becomes a measuring device for minuscule forces applied to one end of the crossbar: a small horizontal push results in a sizable angle of twist.
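
To put the principle in numbers (the article gives none, so every value below is invented): a horizontal force F applied at the end of a crossbar of half-length L exerts a torque F × L on the wire, and a wire with torsion constant kappa twists through an angle theta = F × L / kappa.

```python
import math

# Torsion-balance response, theta = F*L/kappa. All values are invented;
# they only show how a tiny force becomes a readable angle.
kappa = 1e-5   # torsion constant of the wire, N*m per radian (hypothetical)
L = 0.9        # lever arm from wire to ball, m (hypothetical)
F = 1.5e-7     # horizontal force on the ball, N (hypothetical)

theta = F * L / kappa        # twist angle in radians
print(math.degrees(theta))   # ~0.8 degrees: small, but easy to read off a scale
```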

Cavendish attached a small lead ball to one end of the crossbar, brought an enormous weight on a fixed support to a point slightly in front of the ball, and then watched the wire twist as the ball was attracted to the weight. (Actually, to balance his apparatus, he placed identical balls at both ends of the crossbar, dumbbell fashion, and doubled the attraction by mounting two large weights symmetrically as close to the balls as he could get without their touching.) By measuring the minute twist induced in the wire in this contrivance, Cavendish read off the actual force that caused it. From this, and the measured dimensions of the apparatus, he was able to deduce the value of G by means of simple proportions. The result was in the ballpark of the modern value, but what a huge ballpark it was. Cavendish estimated his precision at about 7 percent, which translates into locating the fugitive felon somewhere within the span of 100 blocks.
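
The “simple proportions” can be spelled out: once the twist reveals the force F between a large mass M and a small ball m whose centers sit a distance r apart, Newton's law is inverted to give G = F × r² / (M × m). A sketch with Cavendish-scale guesses, not his recorded data:

```python
# From measured force to G: invert Newton's law, G = F*r**2/(M*m).
# Masses and separation are Cavendish-scale guesses, not his recorded data.
M = 158.0     # large fixed weight, kg (hypothetical)
m = 0.73      # small ball on the crossbar, kg (hypothetical)
r = 0.225     # center-to-center separation, m (hypothetical)
F = 1.52e-7   # force inferred from the wire's twist, N (illustrative)

G = F * r**2 / (M * m)
print(G)      # ~6.7e-11 N*m**2/kg**2: the right order of magnitude
```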

Modern measurements are almost a thousand times better, pinning the culprit down to a specific house (although the disagreements among the new results take the bloom off this achievement). But the uncertainty in the value of G remains astronomical by today’s exacting standards. Historically, G was the first universal constant of physics, and ironically it is by a wide margin the least well known. Modern physics is built on such numbers as the speed of light (c); the charge of an electron (e); and the quantum of action (h), which determines the sizes of atoms. Some of these constants have been measured to within one part in 100 million, others to a few parts per million. By comparison, our knowledge of G is shockingly crude.

The constants c, e, and h are entangled with one another in a tight web of interconnections that spans the microworld, in the sense that all measurements of atomic and nuclear properties must ultimately be expressed in terms of these and a small handful of other numbers. Such entanglement entails a complex system of cross-checks and mutual constraints that help fix the fundamental constants with impressive precision. Unfortunately, G does not participate in any of these relationships, because gravity plays no role in the atom. The gravitational attraction among atomic constituents is 30 or 40 orders of magnitude weaker than the competing electrical and nuclear forces and is thus completely irrelevant. In the end G stands naked and aloof, the ancient, unapproachable king of the fundamental constants.
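
That “30 or 40 orders of magnitude” is easy to verify from textbook constants. Because gravity and the electric force both fall off as the square of the distance, their ratio for a given pair of particles is a pure number, independent of separation:

```python
# Ratio of gravitational to electric attraction for two protons and for
# two electrons; distance cancels since both forces go as 1/r**2.
G = 6.674e-11     # gravitational constant, N*m**2/kg**2
k = 8.988e9       # Coulomb constant, N*m**2/C**2
e = 1.602e-19     # elementary charge, C
m_p = 1.673e-27   # proton mass, kg
m_e = 9.109e-31   # electron mass, kg

print(G * m_p**2 / (k * e**2))   # ~8e-37: about 36 orders of magnitude
print(G * m_e**2 / (k * e**2))   # ~2e-43: about 43 orders of magnitude
```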

So why not just leave it alone? Why do scientists devote their energies and careers to a better determination of G instead of pursuing more profitable ends? Currently there is no practical value in knowing its magnitude more precisely: neither astronomy nor geology nor space exploration would benefit from a new measurement. Instead scientists want to measure G as a matter of principle—just because it’s there. And that is how science progresses. In the late nineteenth century astronomers struggled to tease out a tiny anomaly in the orbit of Mercury—an aberration that would never affect a calendar or the prediction of an eclipse. They measured it just because it was there, with no inkling that it would soon emerge as the sole experimental anchor of a revolutionary new conception of space and time—the general theory of relativity.

By the same token, the value of G may suddenly vault into the limelight. These days physicists are talking about a theory of everything, an ambitious scheme that would unite the description of all forces and particles into one seamless overarching framework. And if they find it, it will yield connections between c, e, h, and G that will serve as tests of the theory. In such tests the venerable old G threatens to be the weak link, unless we learn to know it better. The fear of this eventuality—and not the hope of coming up with some gadget for the marketplace—is what inspires the G hunters.

Two of their measurements are actually refinements of Cavendish’s experiment with the torsion balance. Mark Fitzgerald and Tim Armstrong at the Measurement Standards Laboratory in Lower Hutt, New Zealand, counterbalanced the gravitational attraction in their apparatus with a delicate electrostatic repulsion, which they in turn measured precisely. In this way the crossbar did not even have to move, so oscillations and uncertainties in distance measurements were reduced to a minimum. Their result was the lowest of the three recent measurements, and it is represented by the first house on the street.
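
How big an electrical effect does it take to cancel so small a pull? The article does not describe the New Zealand apparatus in that detail, but a rough sketch assuming a simple parallel-plate geometry (purely illustrative) suggests that only a volt or two is needed:

```python
# Voltage needed for a parallel-plate capacitor to supply a force equal to
# a Cavendish-scale gravitational pull. Geometry is assumed for illustration;
# the article does not describe the New Zealand apparatus in this detail.
EPS0 = 8.854e-12   # permittivity of free space, F/m

F = 1.5e-7         # gravitational force to cancel, N (illustrative)
A = 1e-2           # plate area, m**2 (hypothetical)
d = 1e-3           # plate gap, m (hypothetical)

V = (2 * F * d**2 / (EPS0 * A)) ** 0.5   # from F = EPS0*A*V**2/(2*d**2)
print(V)           # ~1.8 volts: tiny forces balanced by modest voltages
```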

The house at the far end symbolizes the torsion balance experiment of Winfried Michaelis and his group at the Physical-Technical Institute in Brunswick, Germany. Their device went even further than the one in New Zealand to compensate for external influences. Not only was the gravitational attraction of the test balls counteracted electrically, but also Earth’s vertical pull was nullified by floating the balls on liquid mercury. Although the Brunswick physicists cannot account for the huge discrepancy between their result and the others, they are confident of the integrity of their work and staunchly defend it.

Hinrich Meyer’s group in Wuppertal, Germany, determined G by measuring the distance separating two pendulums as the gravitational attraction of two half-ton weights pulled them apart.

The third experiment was different: Hinrich Meyer’s group at the University of Wuppertal, Germany, decided to strike out in a new direction. Instead of a torsion balance, they used a novel arrangement for generating and measuring small forces. If a pendulum hangs straight down, it takes a horizontal force on the bob to set it swinging. But the push required to give it the very first minute sideways nudge is exceedingly small. (Consequently hanged men swing in a gentle breeze, even as they twist.) This push is related in a simple way to the horizontal displacement of the bob and again allows the accurate measurement of a small distance to be translated into a precise determination of force. Small distances, it turns out, can in fact be measured with high accuracy.

In Meyer’s lab two long pendulums hang side by side from the ceiling, nine inches apart. Their bobs are shaped into smooth metallic mirrors facing each other. A radio signal bouncing back and forth between the mirrors furnishes a reliable measurement of the distance between them. Then two half-ton masses are wheeled up to the outside of the apparatus, each close to one mirror. Each attracts the near mirror a bit more than the far one, so the distance between the bobs changes, and that change, in turn, is translated into a determination of G.
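
The size of the signal explains the heroics to come. A steady sideways force F displaces a pendulum of length L and bob mass m by about x = F × L / (m × g); with F supplied by an external mass M at distance d, the bob's mass cancels out entirely. In the sketch below, the pendulum length and mass-to-mirror distance are guesses, since the article does not give them:

```python
# Rough size of the Wuppertal signal: x = G*M*L/(g*d**2), the sideways
# shift of a pendulum bob pulled by an external mass M at distance d.
G = 6.67e-11   # gravitational constant, N*m**2/kg**2
g = 9.81       # acceleration of gravity, m/s**2
M = 500.0      # one external mass, kg (the article's "half-ton")
L = 3.0        # pendulum length, m (hypothetical)
d = 0.5        # distance from the mass to the near bob, m (hypothetical)

x = G * M * L / (g * d**2)
print(x)       # ~4e-8 m: the mirrors shift by mere tens of nanometers
```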

The principal advantage of the Wuppertal method is simply that it is new and therefore an independent check on the accuracy of the torsion-pendulum technique. Furthermore, since the big external weights can be moved far from the mirrors, the experiment verifies the dependence of Newton’s law on distance. This flexibility gives Meyer added confidence in the accuracy of his method.

But he is not happy. No metrologist, as precision measurement scientists like to call themselves, is ever entirely happy. There are always nagging questions: “What disturbing effect have I left out? What subtle correction factor must I apply to my data? How can I improve my precision?” These questions become all the more haunting when there is no guidance from theory. When astronomers verify Einstein’s general theory of relativity by measuring minute gravitational phenomena in the heavens, they have a target to shoot for, a number that most physicists expect to find corroborated. If they measure something different, they hunt and fiddle and tinker until they either reach Einstein’s value or give up exhausted. The hapless G hunters lack such a target. There is no theoretical prediction for what they should expect; all they can go by is the historical record of previous measurements, which may or may not be reliable. Collectively they suffer from a mild form of anomie, a feeling that they lack laws and rules to follow, a sense of uprootedness.

The Wuppertal team had more than its share of worries about external perturbations. The most troubling of these were vibrations of the entire apparatus that resulted in spurious microscopic variations in the distance between the two mirrors. It was easy to discount the “noise” in the data that peaked at noon and dropped to a minimum at 3 A.M.: that was Wuppertal traffic, easily eliminated by working at night. A strange 12-hour cycle of oscillations that remained after the local traffic had been accounted for was attributed to minute Earth movements caused by the tides in the North Sea, 150 miles away. But a burst of perturbations that had no relation to traffic or tides turned out to be a harder nut to crack. It wasn’t until the physicists tackled the intricacies of seismology, a discipline far removed from their training, that they were able to sort out the effects of tremors on their exquisitely sensitive machine. In the end, they had to discard a data set because they found out that it was spoiled by minor earthquakes in Japan, half a world away.

By far the most difficult task of the metrologist is to decide when to quit. Meyer’s group, for example, had already published a preliminary result when they realized that they had improperly accounted for the gravitational attraction of the suspending tungsten wires, each one with a diameter of 0.2 millimeter. This change moved their value of G out of the conventional range into the first block of the street, as it were, but of course it had to be reported. How many more such improvements could they think of if they delayed long enough? Where would their value end up? When would their experiment end?

To account in some way for additional corrections that may have been left out, physicists usually quote numerical results of measurements together with an estimate of the possible error—a range of values rather than a single number. In the fugitive-criminal analogy, the error is represented by the width of a house: the felon is in there somewhere, but we don’t know in exactly which spot. Error consists of two pieces. The first, called statistical error, simply represents the spread of answers you actually get when you repeat any measurement many times in order to report the average. That’s the easy part.

The second piece, called the systematic error, is not much more than a guess about other possible variations in the result. For example, if you measure your height with a metal ruler, you might account for the metal’s expanding when it is heated. You don’t know at what temperature the ruler is supposed to be read, but you know that it couldn’t have changed more than a hundredth of an inch in hot weather. So you add that to your list of systematic errors. Metrologists use the systematic error as protection against unknown effects, but it doesn’t help much.
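
In a sketch, the two pieces combine like this when a result is reported. The run values and the systematic bound below are invented, and adding the two pieces in quadrature is one common convention rather than a universal rule:

```python
import statistics

# Reporting a result: statistical error from the scatter of repeated runs,
# systematic error as a guessed bound on effects that do not average away.
runs = [6.6729e-11, 6.6721e-11, 6.6735e-11, 6.6718e-11, 6.6727e-11]  # invented

mean = statistics.mean(runs)
stat_err = statistics.stdev(runs) / len(runs) ** 0.5   # standard error of the mean
sys_err = 5e-15                                        # guessed bound (hypothetical)

total_err = (stat_err**2 + sys_err**2) ** 0.5          # combined in quadrature
print(f"G = {mean:.5e} +/- {total_err:.1e}")
```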

Helmut Piel, the initiator of the Wuppertal project, notes that by its very nature systematic error is usually underestimated. “If I knew what my systematic error was,” he says, chuckling, “I would eliminate it.” His relaxed attitude echoes that of an earlier metrologist. At the end of his famous report, Henry Cavendish said of another estimate of G, which deviated from his by 22 percent, well outside his estimated statistical error of plus or minus 7 percent: “[It] differs rather more from the preceding determination than I should have expected. But I forbear entering into any consideration of which determination is most to be depended on, till I have examined more carefully how much the preceding determination is affected by irregularities whose quantity I cannot measure.” Since he had not yet worked out an estimate of his own systematic error, he resorted to the customary optimism of the metrologist and reported none.

Virtually all papers on determinations of G end in the same way—with a call for more work, either in the form of refinements of existing efforts, or of radical new approaches. In many laboratories throughout the world, including the United States, new attacks on G are under way. In addition, there are plans to measure the strength of gravity in outer space, far from traffic, tides, and earthquakes, where unprecedented levels of precision appear to be attainable.

In the meantime, the three most recent measurements have thrown the study of G into confusion. But that is as it should be. “We will never understand anything until we have found some contradictions,” claimed Niels Bohr, the father of quantum mechanics. Science, in other words, thrives on anomaly, inconsistency, controversy, and doubt. Certainty kills it.
