Science has long been based on the notion that law and order rule the universe. When primitive people looked at the sky, they could make sense of what they saw only by attributing it to the whims of powerful gods. But in the seventeenth century the German astronomer Johannes Kepler reduced the motion of the planets to three simple laws that guided them along elliptical orbits. His work led Isaac Newton to discover a law of gravitation that applied to any object in the universe. The universe, scientists subsequently assumed, is a predictable, clockwork system. Some parts are more complex than others--the workings of a swirling galaxy, for example, are rather more intricate than those of a pendulum--but directing even the most complex part is the same rule of order, though it may be imperceptible to our limited brain power. Simple causes always produce simple effects, the reasoning went, and complexity must result either from complicated rules or from the interaction of large numbers of things. Thus the simple geometric shape of a planet’s orbit--an ellipse--was seen as a direct consequence of the simplicity of the law of gravity, while the complexity of the DNA molecule was considered a consequence of the huge number of ways in which its atoms could be arranged. Imagine Newton’s and Kepler’s dismay if they could have read the July 3 issue of the journal Science this year. In it Gerald Sussman and Jack Wisdom, a computer scientist and an astronomer, respectively, at MIT, announced that the entire solar system is unpredictable. Without an infinitely precise knowledge of the location and velocity of the planets at any given moment, our Newtonian calculations will be completely wrong after a mere 4 million years. Such startling findings about the changeable nature of the universe are appearing more and more frequently at the frontiers of today’s mathematics. 
We now know that rigid, predetermined, simple laws can lead not only to predictable, everlasting pattern but also to behavior so complex and irregular that it appears to all intents and purposes random. This phenomenon is called chaos. Chaos raises some fundamental questions about the universe: Since order can generate chaos as well as pattern, what is the role of natural law? Is it chaos, not order, that rules the universe? And where do nature’s complex patterns come from, if not from simple laws? At the moment scientists have only begun to ask these questions, and so answers are a long way off. For now they must be content to try to investigate chaos in many different phenomena. Ecologists, biologists, astronomers, chemists, economists, physicists--all have found chaos in their own disciplines. In each case chaos seems to be pointing toward a new understanding of how complexity and patterns arise. Chaos may help explain the evolution of life on Earth. It may shed light on the age-old question of time’s arrow. It may even let us unlock the baffling mystery of subatomic physics. And at some point, all these individual efforts may come back together into a single science of change. Chaos was first uncovered in the field of dynamics, which grew out of Newton’s laws of motion and gravitation. Dynamics studies how systems change over time. (A system, by definition, is a group of bodies that are all subject to the same forces.) Traditionally researchers approached dynamics quantitatively. The state of the system--say, the positions of the planets and their orbiting speed and direction--was given in numerical values, and the job of the dynamicist was to calculate how those numbers change over time. Now, however, the approach has become qualitative--researchers are looking to say things about the general features of the system. 
Thus Sussman and Wisdom, instead of saying, Pluto will be at location X in 10 million years, were essentially saying, The solar system exhibits chaos. Their approach succeeds thanks to an old trick invented by René Descartes, one that they’ve stolen and run backward. Descartes discovered how to turn geometry into numbers by assigning coordinates to points in space. Thus a square can be said to be the area inside the points (0,0), (1,0), (1,1), and (0,1) on a two-dimensional graph. Today we do the reverse: we turn numbers into geometry by pretending that they are coordinates in an imaginary space called phase space. Thus the interaction between a population of foxes and the population of rabbits they prey on might be represented by a dynamic system whose two variables are the number of foxes and the number of rabbits. If we think of those two numbers as coordinates on a two-dimensional graph, we can use them to draw pictures of how the two populations vary. Add a third variable to this ecological model--say, the juniper bushes the rabbits feed on--and you need three dimensions to draw the picture. A more complicated ecological model, with seven different species interacting, similarly determines a seven-dimensional phase space, and so on. Over time the populations of foxes and rabbits rise and fall, and so the point that represents them in phase space moves around. In this way, dynamics changes from long lists of numbers to the motion of points flowing through an appropriate phase space. Often points at great distances from one another all home in on particular regions of the phase space. These regions turn out to have structured geometric forms, and since they attract points, they’re called attractors. Attractors embody the long-term qualitative behavior of a system. If a system changes in simple ways, its attractor is a simple geometric object. For example, a system that does nothing at all is represented as a single fixed point. 
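The fox-rabbit picture can be made concrete in a few lines of code. The sketch below uses the classic Lotka-Volterra predator-prey equations--an assumption on my part, since the text names no particular model--stepped forward with simple Euler integration; each step yields one point (rabbits, foxes) in a two-dimensional phase space.

```python
def lotka_volterra(rabbits, foxes, steps=10000, dt=0.001,
                   birth=1.0, predation=0.1, gain=0.075, death=1.5):
    """Trace a fox-rabbit system as a path of points in phase space."""
    trajectory = [(rabbits, foxes)]
    for _ in range(steps):
        dr = (birth - predation * foxes) * rabbits  # rabbits breed, get eaten
        df = (gain * rabbits - death) * foxes       # foxes feed on rabbits, starve
        rabbits += dr * dt
        foxes += df * dt
        trajectory.append((rabbits, foxes))
    return trajectory

path = lotka_volterra(rabbits=40.0, foxes=9.0)
```

Plotting the pairs would show the moving point tracing an (approximately) closed loop: both populations rise and fall and return near their starting values, the simplest kind of attractor after a fixed point.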
A system that repeats the same behavior periodically traces a closed loop, since it always comes back to the same place in phase space. Surprisingly, however, a system built on simple processes--three objects floating around each other in space, for instance--can also generate far more complex forms, which dynamicists call strange attractors. And they are indeed strange: often they look like milk swirling around on the top of a cup of coffee. They are so convoluted, twisted, and folded in on themselves that a point traveling along one of them seems to be moving completely randomly. But it is important to remember that both traditional attractors and strange attractors are generated by simple processes. They are equally common, and given the right kinds of nudges, most dynamic systems can be persuaded to exhibit order or chaos. Chaos, being ubiquitous, strikes at the heart of what we think of as nature’s laws, with their safe, predictable consequences. Though simple rules may govern individual atoms, nevertheless the behavior prescribed by those rules may well be chaotic. That implies that nature’s laws can’t be responsible for the simplicity and order you encounter in your daily life. Chairs, dogs, houses--these are simple in the sense that we recognize them as entities and have a good idea how they will behave and how we should relate to them. But a dog is made from an inordinately large number of atoms. How does a complex system of atoms know about the big simplicities of wagging tails and chasing cats? How can organized large-scale structures function when the atoms that make them up are swimming in an ocean of chaos? How can stable patterns of behavior arise in a fundamentally chaotic world? To understand how chaos--and the questions chaos raises--affects your everyday life, consider the seemingly simple matter of your heart. 
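The best-known strange attractor is not one the article mentions, but it makes the point vividly: Edward Lorenz’s three simple weather equations. Even a crude Euler integration, sketched below, shows the trajectory swirling forever inside a bounded region, shuttling between two lobes without ever settling down.

```python
def lorenz(x, y, z, steps=20000, dt=0.005,
           sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Follow one point of the Lorenz system through its phase space."""
    points = []
    for _ in range(steps):
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        x, y, z = x + dx * dt, y + dy * dt, z + dz * dt
        points.append((x, y, z))
    return points

pts = lorenz(1.0, 1.0, 1.0)
```

Plot x against z and the famous butterfly-shaped attractor appears: convoluted, folded, never repeating.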
Traditional science treats it as if it were a pump beating like clockwork, whose complicated cycles can be broken down into a number of simple waves of standard shapes. Real hearts are far more puzzling. Your heartbeat is triggered by signals from your brain, but the actual rhythmic contractions are the result of a democratic vote by millions of muscle fibers, all agreeing to contract in synchrony. Such a system is obviously far from clockwork. The rhythm of your heartbeat continually varies by tiny but measurable amounts. It’s not a variability imposed from the outside; even when your body is at rest, your heartbeat fluctuates. It is caused by chaotic internal dynamics. Such grand simplicity emerging from the complexity of fine detail at lower levels is a paradox that occurs on all levels of life. How does a heartbeat rhythm emerge from a mob of cells? How does the organized form of a living being emerge from the chaotic motion of its constituent cells and chemicals? How can a group of general characteristics of an animal--everything that goes into the label carnivore, for instance--emerge from evolution? A new subbranch of science called emergent computation aims to tackle such questions. It creates abstract complex systems on computers and watches how they change. One of the simplest, aptly called Life, was invented in 1970 by John Conway, now of Princeton. In Life the computer screen is divided into cells; at every tick of a clock the color of each cell is determined by its own color and the colors of the surrounding cells. These programs begin as a general mess, but frequently they simplify themselves spontaneously with the emergence of large but coherent substructures that grow in intricate patterns, respond to their surroundings, and even reproduce themselves. We have no well-developed ideas about how or why this happens, but it seems to be related to nature’s tendency to produce patterns, like the markings on a peacock’s tail or the spiral arms of a galaxy. 
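Life’s rule fits in a dozen lines. The sketch below uses the standard rule set (a live cell survives with two or three live neighbors; a dead cell comes alive with exactly three) and runs the best known of the game’s coherent substructures, the glider, which crawls diagonally across the grid intact.

```python
from collections import Counter

def step(live):
    """Advance one tick; live is a set of (row, col) cells that are 'on'."""
    neighbor_counts = Counter((r + dr, c + dc)
                              for r, c in live
                              for dr in (-1, 0, 1) for dc in (-1, 0, 1)
                              if (dr, dc) != (0, 0))
    return {cell for cell, n in neighbor_counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A glider: a small pattern that reappears, shifted, every four ticks.
glider = {(0, 1), (1, 2), (2, 0), (2, 1), (2, 2)}
gen = glider
for _ in range(4):
    gen = step(gen)
# gen is now the same glider, moved one cell down and one cell right.
```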
One thing is clear: since the atoms, molecules, and cells of a dog don’t know that its tail is wagging, the interaction of all the components of the dog must possess its own collective agenda. The whole, it turns out, really is greater than the sum of its parts. Perhaps chaos and complexity are so common because they bestow advantages on the things that contain them. Chaotic systems can respond to an outside stimulus far more rapidly than nonchaotic ones can. To understand why, think of tennis players waiting to receive a serve. Do they stand still? Do they move like a pendulum from side to side? Of course not. They dance erratically from one foot to the other. In part they are trying to confuse their opponents, but they are also getting ready to respond to any serve that is sent their way. In order to move quickly in any particular direction, they make tiny movements in all directions at once. Maybe a chaotic heart can respond to a sudden stress without causing undue wear and tear on itself. Similarly, a nervous system that has developed from--and still preserves--an underlying sea of chaos could offer definite advantages to an evolving organism. It’s reasonable to believe that prey whose nervous systems incorporate chaos are harder for predators to catch. An ecosystem is like an organism, and not surprisingly, ecosystems tend to be exceedingly complex for similar reasons. It is advantageous for an ecosystem to evolve into a state of high diversity, for a diverse ecology has many more ways to recover from disaster. Everybody knows that a one-crop economy is a mistake, and so is a one-crop ecology. However, although we can easily see the advantages of diversity, it’s not so clear where it comes from. Evolution, after all, works with what is potentially possible, not with what might theoretically be desirable. The ability to fly would offer advantages to human beings, but so far evolution has failed to oblige. 
We must find a mechanism for the growth of diversity, not just a need for it. One particularly interesting kind of emergent computation, known as artificial life, may someday help answer these questions. Researchers program into computers some of the basic assumptions of Darwinian evolution--mutation, natural selection--and sit back to watch what happens. What happens, at least in very simple models of ecologies--such as one devised by Tom Ray at the University of Delaware, in which the organisms are programs competing for a computer’s memory space--is that life naturally exhibits long periods of stasis and mass extinctions. In Ray’s ecology, evolution begins with a single self-replicating organism. When enough copies exist, simpler organisms can subvert its reproduction, and parasites appear. Then hyperparasites appear, creatures that hijack the parasites’ own technique; then social creatures, which combine their efforts in order to reproduce; then more complex creatures, which borrow abilities from everything else, and so on. Many evolutionary biologists and paleontologists believe that the fossil evidence available to us suggests natural evolution proceeds in fits and starts. But many of them have assumed that external catastrophes, such as asteroid impacts and climate changes, were responsible. In artificial life, these fits and starts seem to be inevitable--but still puzzling--consequences of the general features of evolution. Artificial life also seems to say to us that there is a natural propensity for evolving systems--be they biospheres or cyberspheres--to move toward more complexity and organization. Tantalizing as such computer experiments are, the science behind them lacks a solid mathematical theory explaining just why such things happen. We also need to understand the universal patterns common to all evolutionary systems, both organic and inorganic. 
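Tierra itself is far richer than anything that fits here, but the bare Darwinian loop the researchers program in--replicate, mutate, cull--takes only a few lines. In this toy of my own (not Ray’s model), organisms are bit strings and "fitness" simply counts 1-bits; nothing so simple can breed parasites, but it does show selection extracting steady improvement from random mutation.

```python
import random

def evolve(generations=200, capacity=50, genome_len=32,
           mut_rate=0.02, seed=1):
    """Replicate-mutate-cull loop; returns the best fitness reached."""
    rng = random.Random(seed)
    pop = [[0] * genome_len for _ in range(capacity)]
    for _ in range(generations):
        # each organism produces one offspring with per-bit mutations
        offspring = [[bit ^ (rng.random() < mut_rate) for bit in genome]
                     for genome in pop]
        # selection: keep the fittest half of parents plus children
        pop = sorted(pop + offspring, key=sum, reverse=True)[:capacity]
    return max(sum(genome) for genome in pop)

best = evolve()
```

Fitness ratchets upward because selection keeps the rare beneficial mutations and discards the rest; the stasis and extinctions Ray saw need the richer setting of programs rewriting one another.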
For example, parasites occur in artificial life, real life, technology, society--even the stock market. There is some universal principle behind parasites, but what is it? Perhaps our bodies and the ecosystems here on Earth are just the manifestation of the larger roles chaos plays in the universe. Chaos may, for instance, be a key to solving one of the big puzzles of classical physics, the arrow of time. In a time-reversed universe, rivers would flow upward from the sea to the mountains, and the inhabitants wouldn’t be in the least surprised. They would explain that rivers are formed by an excess of water in the sea, deposited by devaporation, creating excess pressure that forces water up into the mountains, until it forms pools that spurt raindrops up into waiting clouds. Dynamics put in reverse is self-consistent, and so apparently it can’t forbid such a thing happening. In fact, any mechanical system, such as the one consisting of all the molecules in a room, is time-reversible. If all moving particles are simultaneously reversed, then the system will retrace its steps. Scrambling an egg is a legal dynamic process, and therefore so is its time-reversal. So why don’t eggs unscramble in our world? The usual answer is to invoke the second law of thermodynamics. There is a quantity called entropy, which measures the amount of disorder in a system, and entropy must continually increase. The arrow of time is the direction of increase of entropy. A scrambled egg is more disordered than an unscrambled one, entropy increases, and that’s the way time flows. Fine. So how did the chicken create the ordered egg from disordered chicken feed? Do living systems somehow borrow a decrease of entropy from their environment? Do they push the entropy into their environment, making it even more disordered than it would otherwise have been, and use the spare negative entropy to build an egg? Chickenkind has been borrowing an awful lot of negative entropy over the millennia. 
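That retracing can be checked directly on a computer. The sketch below (my illustration) pushes a mass on a spring forward with the velocity Verlet integrator--chosen because, unlike naive Euler, the scheme itself is time-symmetric--then flips the velocity and runs the same number of steps.

```python
def verlet(x, v, steps, dt=0.01, k=1.0):
    """Velocity Verlet for a unit mass on a spring with stiffness k."""
    a = -k * x
    for _ in range(steps):
        x += v * dt + 0.5 * a * dt * dt
        a_new = -k * x
        v += 0.5 * (a + a_new) * dt
        a = a_new
    return x, v

x0, v0 = 1.0, 0.0
x1, v1 = verlet(x0, v0, 1000)    # run forward
x2, v2 = verlet(x1, -v1, 1000)   # reverse the velocity, run again
# (x2, -v2) matches (x0, v0) to within rounding error:
# the system has retraced its steps.
```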
Entropy as it is currently conceived may not make a good arrow. The law arises from a particular thought experiment: the interaction of two previously independent systems. Imagine two cats, one black and one white. The white cat has its own set of white fleas; the black cat has black fleas. If the cats don’t meet, their respective fleas hop around randomly on their own personal cats, exhibiting a specific degree of disorder, or entropy. This remains constant--until the cats meet. The fleas can then hop from one to the other. Now both cats have a mixture of black and white, creating a kind of gray cloud of fleas. Just as one party with ten children is far more chaotic than two parties each with five, there is more disorder among the gray fleas because there are more fleas! That’s all very well--but the reasoning says that if single, isolated systems are left to themselves instead of interacting, they have constant entropy. The relentless-increase-of-entropy rule does not apply to a single free-running system and so does not conflict with the time-reversibility of dynamics. If you ran the cats backward, each would depart with a set of fleas, and you would then define the white fleas to be those on the white cat. Where originally there was just one system--a pair of cats with a shared pool of fleas--there are now two distinct subsystems, each comprising one cat and its set of fleas. Thus it seems that when previously integrated systems become isolated, entropy decreases. Nothing relentless about that. The point is that entropy can’t in fact account for the arrow of time, because it’s based on the mathematical fiction of isolated systems. Of course scientists knew that the cats have to interact with their surroundings, but the interaction seemed so small that they could safely ignore it. That was in the days before chaos. Imagine time-reversing some almost-isolated subsystem of the universe. 
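The cats and fleas are, almost word for word, the Ehrenfest "flea" model of statistical mechanics, and the drift toward a well-mixed state is easy to watch. In the sketch below (my illustration) every flea starts on the white cat; at each tick one flea, picked at random, hops to the other cat.

```python
import random

def mix(n_fleas=100, ticks=5000, seed=0):
    """Track how many fleas sit on the black cat after each hop."""
    rng = random.Random(seed)
    cat = [0] * n_fleas          # 0 = white cat, 1 = black cat
    history = []
    for _ in range(ticks):
        cat[rng.randrange(n_fleas)] ^= 1   # one random flea hops across
        history.append(sum(cat))
    return history

counts = mix()
# Early on the count climbs; after a while it just hovers near
# n_fleas / 2 -- the mixed, maximum-disorder state.
```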
For a short period it really will seem to undo its previous behavior, proceeding backward. However, it continues to interact--albeit very weakly--with the unreversed portion of the universe. That interaction, we have good reason to believe, is chaotic. And one of the basic features of chaos is the butterfly effect: very tiny changes become amplified to produce major changes in the observed motion. The flap of a butterfly’s wing can change the weather a month later. On the strange attractor that represents the weather, two points that are infinitesimally close to each other quickly diverge over time on different paths. So, in the time-reversed portion of the universe, the butterfly effect comes into play, and soon that subsystem is no longer following the intended time-reversed motion. This has nothing to do with random motion: the chaotic fleas bouncing around on the cats are performing their own predetermined dance. The laws of physics imply that the dance can run backward. But suppose we manage to reverse the dance of the fleas on the black cat, while leaving those on the white cat running in the original direction. Only if the black cat were truly isolated from the rest of the universe could the laws of physics allow it to keep running backward forever. Given the butterfly effect, even the tiniest degree of nonisolation is fatal to the argument. Say we remove the white cat a billion light-years from the black one. When the white cat thinks about having a scratch, the molecules in its brain will shift slightly. That in turn will change their gravitational attraction. When this slight ripple in the gravitational web of space makes itself felt on the black cat, it will alter the dance of the black cat’s fleas. This incredibly tiny disturbance will grow, following the butterfly effect, and quickly the black cat’s fleas will be following an entirely different dance from the supposedly time-reversed one. 
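The amplification is easy to exhibit with the simplest chaotic system there is, the logistic map x -> 4x(1 - x) (my example; the article names no equations). Two starting values differing by one part in a billion soon bear no relation to each other.

```python
def iterate(x, steps):
    """Apply the chaotic logistic map x -> 4x(1 - x) repeatedly."""
    for _ in range(steps):
        x = 4.0 * x * (1.0 - x)
    return x

a, b = 0.3, 0.3 + 1e-9
early_gap = abs(iterate(a, 10) - iterate(b, 10))   # still microscopic
late_gap = abs(iterate(a, 60) - iterate(b, 60))    # after the gap has roughly
                                                   # doubled sixty times
```

The separation grows roughly like 2^n, so by about thirty steps the two trajectories are effectively independent--a numerical butterfly effect.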
Because of the butterfly effect, you can’t get away with saying there is such a thing as an almost isolated system. Only an absolutely isolated system is time-reversible. Strictly speaking, there is only one of these: the universe as a whole. As an isolated system, the universe’s total entropy remains constant. That means that it has no way to point time’s arrow, so we have to look elsewhere to understand why every time we observe the transition from an unscrambled egg to a scrambled one, it always goes the same way. The answer seems to be related to the external conditions under which the transition takes place. Let’s say that a scrambled egg sitting on your breakfast plate spontaneously starts unscrambling itself. Molecules that escaped into the air during the cooking return; the long protein strands repair themselves; the yolk and white separate. Going in this direction, the egg has to hope that the conditions surrounding it are precisely right. If the black and white cats rub against the table leg, the vibrations will prevent the molecules from reconstituting themselves as they’re supposed to. Running forward, systems are apparently immune to cats and other interferences. That may be because an egg and a human and a galaxy are all the products of a system that’s been organized for billions of years. The history of the entire universe has to fit together consistently, and the butterfly effect destroys any attempts to reverse small bits of it. While chaos may run the universe on its greatest scale, it may also be at work on its smallest. On the level of subatomic particles, Lady Luck seems to rule. Radioactive atoms decay at random, their only regularities being statistical. A large quantity of radioactive atoms has a well-defined half-life, a period of time during which half the atoms will decay. But we can’t predict which half. This randomness isn’t just a matter of ignorance; it’s explicitly built into the theory of quantum mechanics. 
Albert Einstein protested this sort of institutionalized nihilism, saying that God doesn’t play dice. Is there really no difference at all between a radioactive atom that is not going to decay and one that’s just about to? Then how does the atom know what to do? Might the apparent randomness of quantum mechanics be fraudulent? Underneath the confusion, is chaos really at work? Perhaps it would be useful to think of an atom as some kind of vibrating droplet of cosmic fluid. Radioactive atoms would vibrate very energetically, and every so often a smaller drop could split off--what we would perceive as decay. The vibrations would be too rapid for us to measure in detail, so that we could only measure averaged quantities such as energy levels. Now, classical mechanics tells us that a drop of fluid can vibrate chaotically. Its motion is deterministic--in other words, it is ruled by simple, comprehensible laws--but it is unpredictable. Occasionally, seemingly at random, the vibrations conspire to split off a tiny droplet. The butterfly effect makes such an event unpredictable; but it has well-defined statistics, a half-life. Could the apparently random decay of radioactive atoms be something similar, but on a microcosmic scale? After all, why are there any statistical regularities at all? Are they traces of an underlying determinism? Where else can statistical regularities come from? Unfortunately, as with so many other applications of chaos, nobody has yet made this seductive idea work. But it would be an appealing way to render the deity’s dice deterministic. Chaos may be making many great scientists turn in their graves, but perhaps it can at least keep the shade of Einstein happy.
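The droplet idea can at least be caricatured numerically. In the toy below (my own illustration, not a model from the article), each "atom" is a trajectory of the chaotic logistic map, and it "decays" the first time it wanders into a narrow window. No individual decay time is predictable--the butterfly effect sees to that--yet the ensemble shows a clean survival curve with a well-defined half-life.

```python
def decay_time(x, lo=0.0, hi=0.01, max_steps=10000):
    """Ticks until the chaotic trajectory first enters the decay window."""
    for t in range(max_steps):
        x = 4.0 * x * (1.0 - x)
        if lo < x < hi:
            return t + 1
    return max_steps

# An ensemble of "atoms" with slightly different starting states.
times = [decay_time(0.1 + 0.8 * k / 5000.0) for k in range(5000)]

def survivors(t):
    return sum(1 for d in times if d > t)

half_life = next(t for t in range(10000) if survivors(t) <= len(times) / 2)
```

Each trajectory is fully determined by its starting value, yet the population thins out at a steady rate, just as a radioactive sample does: deterministic chaos producing statistical regularity.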