Time to feed the Metabolism. Hands encased in two layers of latex gloves, Steen Rasmussen reaches behind the thick wall of Plexiglas and lifts a small white tube. Gingerly opening the lid, he adds minute droplets of clear liquid, then deftly flicks the tube to blend this food with the radioactive brew within. Though there is little for the eye to see, the Metabolism is busy. The microscopic collection of molecules has perked up its chemistry and begun to grow. How much will it grow? Rasmussen himself doesn’t know. But he does know that if it’s deprived of food, or exposed to too severe a change, the Metabolism will, well, die.
Actually, the Metabolism is a deceptively simple concoction of chemicals--they happen to be different forms of DNA, but in theory they could be other molecules--that assemble themselves into longer chains. That’s an encouraging start, but it’s not necessarily a step toward life. Nonliving things grow, too--crystals, for example.
Right now Rasmussen is egging his creation on. He wants to prove that it grows in a particularly lifelike way, not just passively, like a snowflake. He has designed the Metabolism as a cooperative jumble of chemical activity: each of the many reactions lends a hand to one of the others. Once he provides the raw materials, the reactions should reinforce each other until an ongoing, self-maintaining symphony of interactions has pulled itself up by its bootstraps. This cooperative chemical network will then sustain itself for long periods of time, requiring only occasional feedings--that is, replacements of the one chemical that happens to break down quickly. That, at least, is the way it’s supposed to happen; it’s too soon to say whether the thing in the tube is playing by these rules.
Even if Rasmussen can demonstrate that the Metabolism actively pulls itself together, will that mean it has crossed the threshold between mere chemistry and life? Rasmussen won’t make that claim. “I call it synthetic protolife,” he says. “It’s somewhere between living and nonliving.” In fact, while his wisps of DNA draw matter and energy from their environment to maintain themselves and grow, they approach the dictionary’s definition of a metabolism. For now, the threshold between living and nonliving isn’t clearly defined, and the debate is certain to become more intense as Rasmussen and other ambitious researchers come up with complex entities that seem more and more lifelike.
If creating synthetic life can be thought of as extreme science, then 37-year-old Steen Rasmussen is the ideal perpetrator. Racing across the eerily beautiful New Mexico landscape in his beat-up old Nissan 280ZX, the six-foot-two-inch Rasmussen offers a steely Nordic grin to the blast of frigid winter air that whips through the sunroof he has opened to better enjoy the experience. “I’ve been reading Frankenstein lately,” he shouts above the tumult, as he reaches over to turn off the heat. “It makes you wonder about what you’re getting into when you’re playing around with inventing life.”
Nature beat him to it, of course, but nature had at least a billion years and the entire surface of Earth for conducting the vast number of chemical experiments that eventually coughed up the first protoplasm. Rasmussen’s only hope for beating the tremendous odds stacked against him is to eliminate most of the trial and error. To that end, he has enlisted the powerful computer at Los Alamos National Laboratory, where he specializes in cooking up hypothetical forms of life--but only in theory. Specifically, he is pushing a newly burgeoning field of theoretical work called artificial life to its limits, seeing if computer-assisted design will give him the insights he needs to play God with real chemicals.
Artificial-life researchers create all manner of cells, animals, plants, and often entire communities or ecologies boasting a wide range of interacting life-forms, but they do it all on a computer. These electronic creatures are typically viewed video-game style on a screen: they mill about, chase each other, hunt for food (other blobs on the screen), mate (nothing very exciting--they just combine), and otherwise perform the basic functions of life. Unlike the action in a video game, however, these kinds of behavior are not specifically programmed; instead, researchers provide their creatures with simple rules of interaction and then sit back and watch what happens. Thus, for example, one computer creature might be told to move toward food and away from any other creatures. It’s anyone’s guess what direction it will move in when it’s surrounded by both food and other creatures.
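To make that idea concrete: a rule of the move-toward-food, away-from-creatures kind might be coded along these lines. (This is a generic illustration; the function name, the distance weighting, and the one-step grid movement are all my own inventions, not any particular researcher’s program.)

```python
def choose_move(position, foods, creatures):
    """Pick a unit step for a creature attracted to food and repelled
    by other creatures (a toy illustration of a simple interaction rule).

    Each food item pulls the creature toward it; each neighbor pushes
    it away. The pulls and pushes are summed, so with food on one side
    and a crowd on the other, the outcome depends on the balance of the
    two--which is why the resulting motion can be hard to predict.
    """
    x, y = position
    dx = dy = 0.0
    for fx, fy in foods:  # attraction: lean toward each food item
        dist = max(abs(fx - x) + abs(fy - y), 1)
        dx += (fx - x) / dist
        dy += (fy - y) / dist
    for cx, cy in creatures:  # repulsion: lean away from each neighbor
        dist = max(abs(cx - x) + abs(cy - y), 1)
        dx -= (cx - x) / dist
        dy -= (cy - y) / dist
    # Move one cell in the dominant direction, or stay put if balanced.
    step_x = (dx > 0) - (dx < 0)
    step_y = (dy > 0) - (dy < 0)
    return x + step_x, y + step_y

# Food to the east, a rival to the west: both effects push the creature east.
print(choose_move((0, 0), foods=[(5, 0)], creatures=[(-5, 0)]))  # (1, 0)
```

Nothing in the rule says “flee east”; the eastward move simply falls out of summing two tendencies, which is the sense in which the behavior is emergent rather than programmed.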
Because artificial life’s denizens can be made to live out their full life span in millionths of a second, thanks to the power of today’s high-speed computers, researchers can in a few hours observe evolutionary experiments that would require millions of years to unfold in real life. Most artificial-life researchers believe that their simulations not only mimic nature’s known processes but can also actually reveal biological truths that have never before been identified.
But artificial life’s great advantage is also its biggest drawback: it all happens inside a computer. Most scientists don’t consider anything discovered until it’s observed in the real world, within reach of experimenters’ microscopes, scales, and rulers. And little had been done to transfer the doings of artificial life from computer to lab bench until Rasmussen took up the challenge.
Rasmussen, a Dane by birth, had intended to major in biology at the Technical University in Copenhagen, but he became so absorbed in the required math and physics courses that he ended up majoring in those subjects and going on for a Ph.D. in physics. By that time he had already built a reputation for nonconformity. A father at 19, and a high jumper who competed at the national level, Rasmussen liked to prowl up and down the coast on his 30-foot, 60-year-old wooden racing sailboat, or hitch his infant daughter on his back and disappear into the forests on foot or ski for a day or two. Visitors to his apartment had to confront Elmer, the mounted head of a gigantic moose. “Everyone thought I was a crazy man,” he recalls.
His area of specialization didn’t do much to dispel the image. Inspired by the ideas of Nobel chemists Ilya Prigogine in Belgium and Manfred Eigen in Germany, Rasmussen threw himself into the new and somewhat mysterious field of self-organization, which examines how an exquisitely complex system can emerge from components with mundane properties. Imagine, for example, how the first primitive cells, carrying simple chemical instructions to preserve themselves through endless reproduction, evolved to crank out more and more complex life-forms. This rise of complexity seems to violate a fundamental law--that randomness tends to increase in nature--but Prigogine and Eigen developed mathematical methods that demonstrated how mixtures of simple chemicals could self-organize without violating this law.
The key, Prigogine realized, was the chemicals’ ability to pull in raw materials and energy from the environment, which they used to construct and repair their complex forms. Meanwhile, the environment actually became more random as it absorbed waste materials and heat thrown off by the chemical reactions. Overall, randomness in the universe was increasing, even with tiny islands of well-organized chemicals spontaneously popping into existence. Eigen applied this abstract reasoning about chemical systems to natural selection and evolution, showing how certain complicated classes of chemical interactions could eventually spit out the first life-forms.
Some researchers drawn to Prigogine’s work, in particular, saw it as a new paradigm. To them, all of nature seemed constructed on the self-organizing principle: though everything around us, from tube worms to human societies, appears highly complex, infinitely varied, and subtle, these researchers believed the underlying principles are as simple and straightforward as the laws governing how two chemicals combine in a beaker.
Unfortunately, there were no established mathematical techniques for dealing with the explosion of complexity that emerges from all the possible interactions that can occur among even a few simple components. No equations gave quick answers to how most systems would evolve. Frustrated by the lack of tools for analyzing self-organization, Rasmussen turned to the computer and began simulating abstract systems of atomlike dots, an approach he called experimental mathematics.
Soon he was spending 12 hours a day in front of the computer, observing the way his dots evolved. In those days his artificial world might be composed of black and white dots, following rules as simple as these: If you have three black neighbors and you’re white, you turn black. If you have five black neighbors and you’re black, you turn white. “These rules are stupid,” Rasmussen admits, but that was the point of the whole exercise. Before long he was getting his dots to organize themselves into intricate patterns. Given other, equally simple rules, his dots would start clumping together like flocks of birds, adjusting to each other yet maintaining order. Rules that at first seemed stupid suddenly made sense. “After being restricted to the formalism of mathematics,” Rasmussen explains, “going one-on-one with the computer and getting that immediate feedback was like taking off a straitjacket and flying.”
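Rules of this sort describe what computer scientists call a cellular automaton, and a toy version fits in a few lines. This sketch uses the two rules quoted above; the choice of an eight-cell neighborhood, the grid size, and the wraparound edges are assumptions made for illustration, not details from Rasmussen’s work:

```python
import random

def step(grid):
    """One update of a toy two-color cellular automaton.

    Rules (paraphrasing the article): a white cell (0) with exactly
    three black neighbors turns black (1); a black cell with exactly
    five black neighbors turns white; everything else is unchanged.
    The eight-cell neighborhood and wraparound edges are assumptions.
    """
    n = len(grid)
    new = [row[:] for row in grid]
    for i in range(n):
        for j in range(n):
            # Count black cells among the eight surrounding cells,
            # wrapping around the edges (a toroidal grid).
            black = sum(grid[(i + di) % n][(j + dj) % n]
                        for di in (-1, 0, 1) for dj in (-1, 0, 1)
                        if (di, dj) != (0, 0))
            if grid[i][j] == 0 and black == 3:
                new[i][j] = 1
            elif grid[i][j] == 1 and black == 5:
                new[i][j] = 0
    return new

# Start from random noise and let the "stupid" rules run for a while.
random.seed(0)
grid = [[random.randint(0, 1) for _ in range(20)] for _ in range(20)]
for _ in range(50):
    grid = step(grid)
```

Watching successive grids is the whole experiment: the patterns that survive repeated updates are whatever structures the local rules happen to stabilize, which is exactly the feedback loop Rasmussen describes.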
Was there anything to be learned from all this, or was Rasmussen reading more into his ill-defined patterns than was actually there? His work baffled his fellow physics researchers. One day in 1988 a friend of Rasmussen’s was surprised to receive in the mail a description of an upcoming conference in the United States sponsored by the Los Alamos National Laboratory; the notice suggested that there were entire groups of respected scientists engaging in these weird, computer-based investigations. Recalls Rasmussen: “My friend burst into my room holding up this paper and saying, ‘Look, Steen, these guys are doing stuff as crazy as you!’” Though the deadline for submitting work had passed, Rasmussen shipped off a stack of his papers and quickly received an invitation to attend. Within weeks he had accepted a full-time joint position at Los Alamos and the nearby Santa Fe Institute, a scientific think tank.
Almost everyone at the Santa Fe Institute studies some form of self-organization, whether it’s how a protein can emerge from a soup of amino acids or how a stock market can crash when a simple set of buy and sell decisions ripples through the system. “We’re not sure any of this is real science, but that’s okay,” jokes Ed Knapp, the institute’s president.
Only a few years ago the artificial-life group was a quirky sideshow at the institute, but it has recently exploded into prominence. The most visible participant is Chris Langton, a rough-hewn figure in jeans and camping vest who even at an organization where a bolo tie passes for formal attire looks like someone who has walked in off a construction site. One day at the institute Langton is bouncing from room to room in a vain effort to keep up with his colleagues’ requests for his time, pausing only occasionally to grab a bite from a rapidly cooling Hardee’s burger. “I feel like a highway is being built on top of me,” he sighs to an associate who has cornered him. “What else is new?” responds the colleague.
Langton is one of the most passionate advocates of the strong claim for artificial life. The weak claim is controversial enough: it contends that in doing good simulations of life, you can learn about real life. Many people find that assertion hard to swallow, insisting that real life depends on far more variables than a computer program can take into account. No wonder, then, that these doubters positively choke on the strong claim, which, simply put, insists that artificial life, if it evolves into something more sophisticated than today’s versions, could conceivably be real life. “Life is a process, and it shouldn’t matter what the hardware is,” says Langton. “If a simulation meets the criteria for life put forth by biologists--that it can maintain itself, that it self-replicates, that it evolves--then it should be valid to ask if it’s alive.”
Not surprisingly, Rasmussen fit right into this environment. Langton and his cohorts even liked Elmer, offering to move him to the Xerox room and imbue him with speaking capabilities. (For now, at least, Elmer remains in Rasmussen’s cramped Los Alamos office, sporting sunglasses.) “I had felt like a nut back home,” says Rasmussen. “It was such a surprise to come here and find so many people like me.”
Rasmussen immediately got to work whipping up all sorts of computer exotica meant to explore the strange world of self-organization. One of his most interesting creations is the Electronic Garden. To make his garden grow, Rasmussen has enlisted the computer’s memory as soil and a small number of simple programming instructions as seeds. The randomly distributed instructions, which perform such primitive tasks as pulling a small chunk of data out of one location of memory and moving it to another location, are represented on the screen as different-color dots.
Typically, the instructions end up scattering themselves uselessly around the memory; the screen either stays dark or erupts in futile bursts of colorful static. But once in a while a group of instructions will bump into each other, join together, and begin to cooperate in a felicitous way. A dynamic pattern starts to sprout; eventually, the pattern can grow into a sprawling community of cooperative programs that take over all of the memory with their frenetic but ordered activity. Viewed on-screen, the seventy-one-hundredth generation of one such garden looks like an overhead view of a bustling city at night, complete with office buildings, heavily trafficked boulevards, and freight yards. The addition of noise to the garden--that is, the insertion of random errors here and there--causes the flickering lights to scurry even harder in an effort to maintain their neat blocks and bustling avenues; as a result, the programs and the patterns they produce on the screen evolve into even more complex forms. “Look at these little critters,” beams Rasmussen. “Aren’t they happy?”
While working on such abstract systems, Rasmussen was also wrestling with calculations intended to prove that the kinds of things happening in the computer were also possible in the real world. He took the point of view that matter itself is programmable--that is, given a few simple rules about how certain molecules interact with one another, a random assortment of substances could spontaneously arrange itself into an exceedingly complex system.
Most of Rasmussen’s fellow artificial lifers take the same view; what distinguished Rasmussen was that he became determined to prove it in a laboratory. “I couldn’t count the number of people who have told me it’s an absolute waste of time for a theorist to go into the lab,” he says. “But I’ve always had my eye on bringing all this physics and math back to biology. And if you’re serious about learning how biological molecules self-organize, you need to get out there and get your hands dirty.”
But he wasn’t quite sure how to go about doing that. Rasmussen had done some computer modeling of interactions involving RNA. Of course, RNA is a real molecule found in all living cells, one that, like DNA, carries instructions for assembling copies of itself. He already knew, however, that RNA would not mimic the simple self-organizing feat he had in mind; it can’t make copies of itself when isolated from the complex chemistry inside the cell.
At the same time Rasmussen was looking for a suitable real-life molecule, other Santa Fe researchers--Doyne Farmer, Stuart Kauffman, Norman Packard, and Richard Bagley--were employing computer simulations to analyze an autocatalytic set, a group of self-sustaining chemical reactions in which each one produces a molecule that catalyzes, or facilitates, one of the other reactions (by holding two or more molecules together or ripping a molecule apart). Farmer and the others were convinced that life had begun with such an autocatalytic set, which would have served as a primitive metabolism--something that could pull itself together out of a simpler chemical soup and then maintain its structure.
This view is not universally accepted. In fact, most origin-of-life researchers theorize that metabolisms were preceded on the evolutionary ladder by a primitive version of RNA. This early RNA was more versatile than today’s RNA: it could make copies of itself without help from other types of molecules. Because of its ability to reproduce, suggest the replication-first proponents, such a molecule would have prospered in the prebiotic soup. Even more important, because RNA would occasionally make errors when replicating, once in a great while it would have turned out an improved version of itself, thus lighting the fuse for competition, natural selection, and evolution. But on the question of where the first self-replicating molecule came from, origin-of-life theorists are a little vague. Perhaps, they say, it simply formed by chance. So far, however, efforts to create RNA in the laboratory by simulating primeval conditions have failed.
Farmer, on the other hand, was proposing a process that could have emerged from simpler molecules, such as proteins, which have spontaneously been created in laboratories; RNA-like replication, he says, must have occurred later. Although he and his associates had only simulated self-organizing chemical networks on the computer, he believed that the same results could be achieved in the lab. But he wasn’t the one who was going to do it. “We’re not wet chemists,” he says. Wet, in this usage, is meant to distinguish the world of real chemicals from the neater and more convenient simulations. To Farmer and many other artificial-life researchers, the wet world is simply a hopelessly messy version of the computer world. “Chemistry is another form of programming,” explains Farmer, “except you have to spend a lot of time cleaning glassware.”
Rasmussen didn’t feel that way, and he realized that Farmer’s work on autocatalytic networks gave him the bridge to the real world he was looking for. Unfortunately, Farmer hadn’t determined specifically which molecules would fill the bill, and Rasmussen is no chemist. “I hadn’t done a lab experiment since high school,” he says. He is, however, infectiously enthusiastic and anything but shy, and it wasn’t long before he was buttonholing researchers at Los Alamos, Santa Fe, and anywhere else he found himself to tell them his goals. The first to help out was Gerald Joyce, a replication-first researcher at the Scripps Research Institute in La Jolla, California, who suggested that Rasmussen enlist DNA for the autocatalytic set.
Turning to DNA would defeat one purpose of the metabolism-first proposition, which is to avoid starting out with implausibly complex molecules. But Rasmussen wasn’t employing DNA’s remarkable ability to carry self-replicating information; he just needed DNA’s far simpler propensity for clumping stupidly together with other DNA molecules. In a sense, he was tying DNA’s hands behind its back to get it to stand in for a more primitive molecule. If he could demonstrate the principle of a cooperative chemical network with DNA, then it could always be attempted later with proteins. Besides, Rasmussen doesn’t see a need to take sides in the metabolism-versus-replication debate. “My simulations show that a metabolism is easier to form than a self-replicating molecule,” he says. “But perhaps they emerged together.”
Soon Rasmussen was gushing about his plan to Los Alamos biochemist Eric Fairfield. “I said, ‘Steen, wait, slow down,’” recalls Fairfield. “He really needed to figure out exactly which reactions would clearly demonstrate that his ideas worked.” Though already putting in 60-hour weeks on the human genome project, in his spare time Fairfield worked with Rasmussen to plot out the step-by-step creation of the cooperative chemical network.
The first step, they decided, would be to place millions of DNA molecules into a tube filled with a solution of salts and organic substances that would make a friendly environment for self-organizing DNA reactions. Rasmussen would not work with the familiar double-helix version of DNA but with the individual strands that result from pulling the helix apart. Each of these strands would be a pure chain made of either adenine or thymine, two of DNA’s four chemical building blocks. Most of the DNA strands would be short, but Rasmussen would include a few longer strands of each type, as well as an enzymatic protein called a ligase that binds DNA molecules together end to end.
The setup seems complicated, but the plan was simple. At room temperature, most DNA strands pair up lengthwise with complementary strands: adenine pairs with thymine, for example, because those two bases have an affinity for each other. The paired strands take the form of a double helix. When heated, the strands separate. Rasmussen planned to heat and cool his mixture repeatedly. Every heating cycle would free the DNA strands to find new partners. If Rasmussen had everything right, his experiment would be a series of heat incubations, with ligase added as food each time, until a self-organized, self-sustaining metabolism emerged from the brew, much as proteins might have done billions of years ago in some sheltered lagoon.
After every incubation, many of the short adenine strands would snuggle up alongside short thymine strands, just as before. As for the longer strands, they would be so outnumbered by the shorter ones that they would be unlikely to find each other. However, a double-length strand might pair with two complementary shorter strands lined up end to end. When that happened, the ligase would fuse the two shorter strands into one double-length strand. These newly elongated strands would then help to link other shorter strands end to end after the next heating cycle. The more double-length product, the more would be created. In other words, complexity would help build more complexity through autocatalysis--the self-organizing process that Farmer was studying.
Eventually, though, this reaction would slow down as the supply of short strands was exhausted. The double-length strands would be more and more likely to pair up with other double-length strands. Sometimes two strands would try to snuggle up to the same partner--two adenine strands, say, with one thymine strand. If parts of both adenines stuck to the thymine so that their ends touched, the ligase could fuse the two adenines into an even longer, quad-length strand. As before, this reaction would pick up speed until the entire mixture consisted primarily of quad-length strands. Then the quads would start fusing as octos, and so on. After many stages the beaker would be full of long, self-assembled DNA strings.
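The runaway growth described above can be caricatured with a few lines of bookkeeping. This sketch models only the first stage--short strands fusing into double-length strands on double-length templates--and the rate constant and starting counts are invented for illustration, not drawn from Rasmussen’s experiment:

```python
from collections import Counter

def cycle(pop, p=1e-5):
    """One heat-and-anneal cycle of a toy autocatalytic ligation model.

    pop maps strand length -> count. Each strand of length 2L acts as
    a template: two length-L strands line up on it end to end, and the
    ligase fuses them into a new length-2L strand. The number of
    fusions per cycle grows with both the template count and the
    short-strand count (that is the autocatalysis), but it can never
    exceed half the available short strands. The rate constant p is an
    arbitrary choice for illustration.
    """
    new = Counter(pop)
    for length, templates in pop.items():
        half = length // 2
        if length % 2 or pop.get(half, 0) < 2:
            continue
        fusions = min(int(p * templates * pop[half]), pop[half] // 2)
        new[half] -= 2 * fusions      # two short strands consumed...
        new[length] += fusions        # ...per double-length strand made
    return new

# Many short strands, plus a few double-length "seed" templates.
pop = Counter({1: 100_000, 2: 10})
for _ in range(200):
    pop = cycle(pop)
```

Run long enough, the double-length count climbs slowly at first and then explosively, until the pool of short strands is nearly exhausted--the same acceleration-then-slowdown pattern the article describes. Capturing the quad and octo stages would require adding a way for mis-paired strands to seed each new length, which this sketch omits.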
Once the whole thing got going, all Rasmussen would have to do would be to occasionally sprinkle in more ligase, which gradually breaks down. At some stage the DNA would reach a maximum size, with growth balancing decay. The long chains would sometimes break apart, and the odds of that happening would go up as the DNA got longer. When the rate of breaking equaled the rate of joining, the metabolism would simply maintain itself.
At first glance the whole process seems deceptively simple, if not simpleminded, since in fact biochemists use ligase to join molecules all the time. But they typically intervene every step of the way: they remove by-products, increase the concentration of materials needed for the next step, and tweak a host of variables like temperature and acidity to get a good yield for the next step. Rasmussen, by contrast, was proposing a chemical tour de force--a system that would pull itself together through multistaged reactions, taking advantage of simple heat cycles to drive itself forward with minimal guidance.
Now all he had to do was find the time and the chemicals to realize his scheme. After all, he was moonlighting, too. “I wasn’t being paid to do this,” he says. “I was supposed to be building mathematical models of self-organization.” Scrounging up a couple of hundred dollars for the chemicals, Rasmussen worked after hours to make the reactions go. To measure progress, he separated out the longer strands with an electric current. (DNA is electrically charged, and the longer strands flowing through an electrified gel lag behind the shorter ones.) He then estimated the quantity of long strands via radioactive tracers he had embedded in the short strands. All this was an exercise very different from Rasmussen’s familiar computer routines, where thousands of successive reactions can be simulated in a fraction of a second. “Here, I had to put something in, wait four hours, then add something else, wait overnight, add something else, wait half an hour, and so on,” he says. “Sometimes I’d get busy and forget when it was time for the next step, and I’d have to start all over.”
Meanwhile, he had to play around with DNA and ligase concentrations, temperature, salt content, and other variables to get the reactions to kick off. It took Earth a billion years to hit on a mixture where everything was just right, but fortunately Rasmussen has an advantage: molecular biologists have already figured out the basic conditions that support DNA reactions. Before long, he was getting the DNA to grow longer.
Rasmussen can’t make the claim that he’s created a metabolism until he proves that the reaction is proceeding through the self-propelled stages he envisioned. It’s conceivable that the DNA is growing longer through some other mechanism. The ligase, for instance, might simply be joining short strands whose ends touch when two are trying to pair up with the same short complementary strand. In that case the presence of longer strands wouldn’t matter, and Rasmussen would be watching simple ligase chemistry at work rather than a self-reinforcing, autocatalytic set.
At this point, Rasmussen needs help to hone his skill at measuring. “It’s just a matter of getting the right people together for two days to get this thing to work,” he says, “but with all the funding and other pressures on everybody here, that’s been very hard to do.”
Even if he runs tests that conclusively rule out the less interesting ways his DNA might be growing, that will be just the beginning, Rasmussen insists. “What I’m trying to do right now is really only to demonstrate the basic principle of an autocatalytic network,” he says. “But the next phase will be much more like a primitive metabolism.” That will involve trying to get the network to emerge from a more random mixture of DNA strands, to bolster the plausibility of a metabolism spontaneously arising from a primeval soup. Eventually, he’d like to see the network constructed from the kinds of proteins that would have been available in that soup--but he doubts he himself will be the one to do it. “Protein chemistry is just too complicated,” he explains, noting that the chemical properties of proteins depend on the complex ways they fold--a variable no computer has mastered.
No matter how sophisticated a synthetic metabolism becomes, biologists aren’t likely to welcome it officially as a form of life. Notes Langton: “Biologists haven’t ever had to come up with a good definition of life, because, except for viruses, it’s always been obvious--something either moves around and leaves little messes or it’s a rock. They produce lists of requirements, but if we make something that meets those requirements, they’ll just add something else to the list.”
Rasmussen himself doesn’t seem worried about whether his metabolism meets biologists’--or anyone’s--criteria for a form of life. Not that he doesn’t think about the ramifications of what he’s doing; it’s just that he tends to save such meditations for an especially stimulating environment--like the precipitous, mogul-strewn slope that he was skiing down one day last winter. Racing with typical abandon, he was soon lost from sight. He turned up later at a level spot, leaning on his poles, lost in thought. “I’m tinkering with life,” he blurted out. “Am I supposed to be doing this?” He shrugged, and then was off down the next mogul field.