If He Only Had a Brain

Right now it's floating in a dish in Japan. Someday it may be offering you advice.

By David H. Freedman | Aug 1, 1992



Sipping green tea in his cramped Yokohama office, speaking carefully and politely, Masuo Aizawa doesn’t exactly seem like mad-genius material. The notion seems even more dubious when the 49-year-old scientist shows off his pride and joy: a thing that looks like a glass slide sitting at the bottom of a plastic dish filled with a clear liquid. The slide is an electronic chip of sorts, though a peek under the microscope suggests it is a crude one. Instead of the intricately carved circuits and byways of modern chips, this one offers plain stripes; where conventional chips are adorned with millions of tiny transistors, this one seems to have been splattered with mud.

But appearances are misleading. This chip is really a slice of technological chutzpah. Those spindly, muddy blobs on Aizawa’s chip aren’t defects but custom-grown nerve cells that have been arranged into the precursor of a biological electronic circuit--the first step, says Aizawa, toward the cell-by-cell construction of an artificial brain. “Perhaps this is just a faraway dream,” he says, chuckling. “But we are approaching it in steps.”

Aizawa, a biochemist at the Tokyo Institute of Technology, has been captivated by the human brain’s computing abilities. By exchanging electrical signals among themselves, the 100 billion nerve cells, called neurons, in the bony vault perched on top of your neck can recognize a face at 50 yards, hold a rapid-fire conversation, and keep 70 years’ worth of vivid memories at ready access. The world’s most powerful computers, meanwhile, can’t keep up with the patter of a four-year-old. “Why not,” Aizawa asks, “go with the better technology?”

Lots of scientists have devoted their careers to probing the secrets of the brain. And many researchers have designed computer programs and even chips that attempt to mimic a neuron’s properties. Where Aizawa stands apart is in trying to blend the two efforts--to get one of nature’s most sophisticated cells to serve as a living electronic component in a man-made device that could make transistor technology seem like Stone Age stuff. “A neuron looks bigger than a transistor,” he says, “but it processes so many signals that it’s really more like an entire computer chip in itself. I think we can use it to make biocomputers.”

To be sure, Aizawa is a long way from building a computer out of neurons. In fact, the thin stripes of cells laid out on his chip can’t do anything useful yet. (And in fact these cells aren’t actually neurons; they derive, however, from the same parent cells that neurons come from, and after some chemical manipulation they function in much the same way.) But growing orderly arrays of nerve cells on an electrically conductive surface was a formidable task in itself, one that required nearly a decade of painstaking trial-and-error experimentation. And the results have left Aizawa poised to construct simple nerve circuits that can gradually be made more and more complex. Perhaps they can even be made more complex--and more useful--than today’s transistorized chips. It may be as long as 20 more years before he succeeds, Aizawa concedes, but that is all the more reason to make sure he doesn’t lose any time on the early steps.

The brain excels at recognizing patterns and learning because, unlike computers, it doesn’t try to accomplish these tasks in step-by-step fashion. Instead it employs billions of simple computers--neurons--that work in parallel, producing a complex web of signals that surge back and forth, triggering each other. This web can take in different pieces of information coming from the various senses--for example, long ears, eating a carrot, chased by a man with a shotgun and a speech impediment--and come out with an identification: Bugs Bunny.

This approach to information processing is known as a neural network. It works by making connections among groups of neurons that respond in a particular way to the sight of a carrot, other groups of neurons that respond to Elmer Fudd, and still other neurons that fire a unique pattern of signals that means, to your mind, only one thing: that wascally wabbit. Of course, it isn’t quite that cartoon simple. The carrot-recognition neurons must already have learned, through connections with other neurons that respond to orange and long and skinny and edible, what a carrot is; the Fudd-recognition group must have gone through a similar process; and so on. Now, a standard computer program could just as easily pull a rabbit out of a data base by searching for these characteristics. But your brain can do the same trick with the multitude of sounds and nuances emerging from a symphony orchestra (Aha! Beethoven’s Ninth!) or the points in a pointillist painting; one data base program couldn’t handle those disparate tasks. And your brain performs this recognition feat instantly. It would take a data base program, even one running on a powerful supercomputer, much longer to search through every snatch of music you’ve ever heard, or every face you’ve ever seen, to find the correct match.

Furthermore, your brain teaches itself. Through trial and error, it learns to pick its way through this maze of competing signals by strengthening those connections that eventually yield the correct answer (Er, actually it’s Mancini’s ‘Baby Elephant Walk’). This often occurs through the proper neurons’ repeatedly firing--which is why you learn a new phone number by saying it over and over again to yourself. The more often a connection is used in the brain, the easier it is to pass a strong signal through it.
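The “use it, strengthen it” rule described above can be sketched in a few lines of code. This is a purely illustrative toy (the function name and learning rate are invented for this sketch, not anything from Aizawa’s lab): a connection’s strength simply grows each time the neurons on both of its ends fire together.

```python
# A toy Hebbian update: co-firing strengthens a connection, so signals
# pass through it more easily the more often it is used.

def hebbian_update(weight, pre_fired, post_fired, rate=0.1):
    """Strengthen a connection whenever both neurons fire together."""
    if pre_fired and post_fired:
        weight += rate  # repeated co-firing deepens the groove
    return weight

# Rehearsing a phone number: the same connection fires again and again.
w = 0.0
for _ in range(7):
    w = hebbian_update(w, pre_fired=True, post_fired=True)
print(round(w, 1))  # the connection has strengthened from 0.0 to 0.7
```

Each rehearsal leaves the connection a little stronger, which is the whole mechanism behind learning by repetition.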

Spurred by the growing realization that the brain has a good thing going for it, computer scientists have been turning in greater numbers to the design of neural-network-style computer programs. They usually take a few thousand sections of a computer’s memory and use them as ersatz neurons: an initial layer of such neurons is programmed to accept input from the world outside and to pass on electrical signals of varying strengths to another layer of neurons. Those neurons tally the signals and decide what they mean by passing signals along to yet a third layer of neurons. In this third, output layer, each neuron stands for a different answer: a different name, say, or a different direction to move. The first time the network makes a connection between an input face and an output name, for example, the answer is just random. But after making the network do this again and again, scientists can instruct the program to strengthen those connections that lead to the right name, and weaken those leading to the wrong name. After a while, the network gets it right every time.
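The three-layer scheme just described can be made concrete with a small sketch. This is hypothetical code, not any lab’s actual software, and the training rule is deliberately crude: input “neurons” pass weighted signals to a hidden layer, the hidden layer passes them to output neurons, and training strengthens the connections that lead to the right answer while weakening the rest.

```python
# A toy three-layer neural network trained by reinforcing correct connections.
import random

random.seed(0)
N_IN, N_HID, N_OUT = 4, 3, 2

# Connection strengths start out random, so the first answers are random too.
w1 = [[random.random() for _ in range(N_HID)] for _ in range(N_IN)]
w2 = [[random.random() for _ in range(N_OUT)] for _ in range(N_HID)]

def forward(inputs):
    """Pass signals from the input layer through the hidden layer to the output layer."""
    hidden = [sum(inputs[i] * w1[i][h] for i in range(N_IN)) for h in range(N_HID)]
    output = [sum(hidden[h] * w2[h][o] for h in range(N_HID)) for o in range(N_OUT)]
    return hidden, output

def train(inputs, correct, rate=0.2):
    """Strengthen connections into the correct output neuron; weaken the others."""
    hidden, _ = forward(inputs)
    for h in range(N_HID):
        for o in range(N_OUT):
            delta = rate * hidden[h] if o == correct else -rate * hidden[h]
            w2[h][o] = max(0.0, w2[h][o] + delta)  # strengths never go negative

# Show the same "face" (an input pattern) again and again with its "name" (output 0).
pattern = [1, 0, 1, 0]
for _ in range(20):
    train(pattern, correct=0)
_, out = forward(pattern)
print(out.index(max(out)))  # after training, the network answers 0 every time
```

After enough repetitions the connections feeding the correct output neuron dominate, so the network “gets it right every time” in exactly the sense the paragraph above describes.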

But these results, while promising, have yet to bring computers anywhere near the level of the human brain--or even of a bird brain, for that matter. One problem is that the hardware and software employed by scientists to imitate the functions of a neuron fall far short of the real thing. Biological neurons can accept thousands of simultaneous signals and instantly determine whether or not to fire and pass the signal on to another 10,000 or so neurons. Trying to provide that sort of high-speed connectivity to even a few thousand ersatz neurons--never mind billions--can be enough to bring a supercomputer to its knees. To get better artificial neural networks, concede many researchers, they need to develop more neuronlike software or electronic components.

Or, of course, they could use the real thing and put actual neurons onto a chip. But that’s not an idea that many scientists would be willing to run with, thanks to a few nagging obstacles: it’s tremendously difficult to grow neurons, hook them together, or conveniently get signals in and out of them on such an unconventional medium. Individually, these problems are tough enough; together, they appear overwhelming.

Aizawa has a few advantages, though. For one thing, Japanese businesses and government agencies, which work in concert to fund scientific research, are far more encouraging of long-shot endeavors, even if they seem almost wacky by the standards of U.S. funding agencies. Japan is particularly free-spending when it comes to possible breakthroughs in computer technology. Stung by their inability to catch up to U.S. companies in conventional software technology, the Japanese government and various businesses have thrown billions of dollars into large-scale efforts to help the country leapfrog into leadership of more futuristic computer technologies. Thus Aizawa--who chairs one of the government committees dispensing some of this money--has not exactly lacked for support, financial or otherwise.

And then there’s Aizawa himself. Though he professes a fondness for music by the Carpenters, he is actually possessed of a strong inclination to go against the mainstream. One tip-off: when millions of Japanese are stepping into the world’s most formidable rush hour to commute into Tokyo, Aizawa is leaving his Tokyo home and commuting 20 miles out of the city to the Yokohama campus of the Tokyo Institute of Technology. On that campus, Aizawa has made a career of doing something that many scientists once insisted couldn’t be done: transplanting biological processes from the comfort of living tissue to the harsh world of man-made devices. “I call my approach superbiology,” he says. “Biological components are supposed to be ideally suited to their natural environments, but we find ways to adapt them to our artificial systems and make them perform even better than they normally do.”

Aizawa grew up near Yokohama, the son of a banker. He did not have a natural inclination to science. “I loved history in high school, and I hated chemistry. To try to develop a liking for it, I joined the chemistry club, where we did experiments after school.” Oddly enough, the strategy worked so well that Aizawa ended up majoring in electrochemistry--the study of chemical reactions that are enhanced by, or that create, electric fields--at Yokohama National University. In his third year he attended a lecture by a visiting Tokyo Institute of Technology professor, Jun Mizuguchi, who predicted to the audience that biology was going to have a huge impact on technology in the coming years. “I was very impressed,” recalls Aizawa. “I talked with him afterward, and he encouraged me to enter this field. I decided then that I would learn the mechanics of biological systems.”

There was one biological system in particular that pulled Aizawa in this new direction. “My whole reason for being interested in biology surely had to do with the brain itself,” he says. “I had realized that what I am most interested in is human beings, and the most important thing of all is how we think. I knew I wanted to get into brain science, but I wanted to approach it step-by-step from the long way around, taking a technological point of view.” Though he didn’t have the precise steps laid out yet, he knew that neurons were far too complex to tackle directly. First he would have to achieve some sort of technical mastery with ordinary cells; and even before that, he would need to work with parts of cells.

In 1966 Aizawa entered the Tokyo Institute of Technology, nominally as a graduate student in biochemistry, but determined to add a twist to the subject that would carry him toward his distant goal. “I tried to create a new field for myself,” he says, “an interdisciplinary field that combined life sciences and electrochemistry.” He quickly found a cellular component to focus on: mitochondria, which extract energy from sugar and turn it into small banks of electric charge. He was soon at work on a biobattery, in which the proteins that make mitochondria go were cajoled into performing their trick in a tiny electrode-equipped jar.

The battery worked, but its modest 1.5 volts, as well as the tendency of the complex proteins to break down quickly, precluded its application as a commercial battery. Unfazed, Aizawa converted his biobattery into a supersensitive glucose detector: when even trace quantities of glucose (a sugar) were present, the device put out a tiny but detectable current. The little jar thus turned out to be one of the first so-called biosensors and was eventually developed into a version that can, among other applications, help diabetics monitor their blood sugar level.

After graduating from the institute in 1971 and accepting a research position there, Aizawa continued to hone his bioengineering skill, designing the first immunosensor--a device that employs antibodies of the sort found in our immune system to ferret out and lock onto almost any sort of foreign molecule. Antibodies to particular disease-causing organisms are used to coat an electrically conductive surface. A sample of a patient’s blood or lymph fluid is placed on the surface. If the antibodies grab onto anything in the fluid, it changes the voltage signal across the surface, indicating there’s something there that shouldn’t be present.

Yet Aizawa hadn’t lost his interest in the brain; in the back of his mind, he wondered if there wasn’t some way to do with nerve cells what he had accomplished with mitochondria and antibodies. If he could somehow couple nerve cells to an electronic device, he might be able to fashion a crude, semi-artificial neural network. But that would require growing nerve cells on electrodes--that is, on some sort of conductive surface--so that electric signals could be inserted into and extracted from the cells. In 1980 that was an outrageously farfetched notion; even ordinary animal cells hadn’t been grown on electrodes, and mature nerve cells are so much more delicate that it was all but impossible at the time to culture them in even the most hospitable media. “Animal cells find many different types of substrates friendly,” explains Aizawa. “But neural cells have a very delicate sense of friendliness.”

Aizawa, who had by now moved to the University of Tsukuba, decided to tackle ordinary cells first. He tried to get the cells to proliferate on a number of different conductive and semiconductive materials, including gold, platinum, titanium, carbon, and conductive plastics. The best results, it turned out, came with the semiconducting compound indium tin oxide: grudgingly, the cells divided and increased in number. But the key, Aizawa knew, was to be able to control that growth, to make the cells form patterns that might eventually form the basis of an electronic circuit. Eventually he wanted to use nerve cells, and when nerve cells grow, they send out long, tentaclelike formations called neurites; it is through interconnected webs of neurites (known as axons and dendrites) that nerve cells in the body transfer electrical signals to one another. But if Aizawa grew nerve cells on his slide and they were free to throw out neurites in every direction, he would end up with a dense sprawl of haphazard growth that would defy any efforts to study, let alone influence, signal transmission.

On a hunch, he tried placing a small voltage--on the order of a tenth of a volt--across the coating. He reasoned that because a cell membrane contains molecules with a slight electric charge, they might respond to a similar charge in the surrounding medium. That charge seems to trigger movement among the molecules, bunching them together to plug holes in the membrane that allow chemicals that stimulate cell growth to enter. Sure enough, the tiny voltage slowed cell growth down, although it didn’t stop it completely, and didn’t seem to harm the cell.

Yet to build a primitive neural network, Aizawa knew he would need to do a lot more than hobble some cells. He’d need an orderly array of nerve cells; in fact, the best way to examine signal transmission would be with a long, single-file string of connected nerve cells. With a string of nerve cells, it would be somewhat easier to introduce a voltage at one end of the string and then detect the output signal at the other end, or anywhere in between. It would also allow him to perfect techniques for strengthening various neural connections through repeated firing, and perhaps to discover other ways of influencing the transmission of signals. Once the properties of neural strings were mastered, the strings could be run side by side to form an interconnected array, much like the computer-simulated neural networks.

So Aizawa tried to fashion cellular strings. He continued to study ordinary animal cells, exposing the cells growing on the indium tin oxide to a wide variety of voltages. By the mid-1980s, he had discovered that different voltages had different effects. While .1 volt slowed cell division slightly, voltages of .2 and .3 depressed it even more. A charge of .5 volt was too hot; it usually proved fatal to the cells.

A voltage of about .4, however, turned out to be just right. It stopped the animal cells from dividing without otherwise affecting their function in any way. “I was amazed,” says Aizawa. “It was as if they went into hibernation.” He realized that this discovery could be exactly the one he needed: if the right voltage froze animal cell division, perhaps it could also be employed to control neurite growth.

In 1985 Aizawa returned to the Tokyo Institute of Technology to found its department of bioengineering, and he continued his research. By 1986 he was ready to try his hand at nerve cells. To improve the odds, Aizawa and graduate student Naoko Motohashi (one of Japan’s relatively rare female scientists) decided to work with a type of cell known as PC12 rather than jumping into neurons. PC12 cells are a special line of cells derived from a tumor in the adrenal gland of a rat--a tissue that stems from the same cells in the fetus that give rise to nerve cells. They are more rugged than true nerve cells and easier to grow in culture. And they perform one very neat trick. The cells divide rapidly, as tumor cells are wont to do, until they come into contact with a substance known as nerve growth factor, or NGF. Then PC12 cells stop dividing and within three days start to grow neurites. Within two weeks they are converted into fully functional nerve cells.

At first, the PC12 cells wouldn’t reliably grow into nerve cells on the indium tin oxide. But Aizawa and Motohashi kept at it, varying the voltage, the temperature, the thickness of the coating, the cell nutrients in the fluid in the petri dish in which the slide was submerged (the main ingredient was calf serum), and every other variable they could play with. “We had to keep refining our experimental technique until we found exactly the right conditions,” he recalls. After several months they finally had nerve cells growing on the oxide--but the cells didn’t always respond to their efforts to freeze neurite growth with a higher voltage. For more than another year the two researchers carefully experimented with voltages, varying the strength and the timing of the applied charge. “We kept having problems with the reproducibility of the data,” says Aizawa. “After a while we started to have doubts about whether this phenomenon could be made reproducible.”

Finally, though, in 1989, the scientists were ready to declare the experiments a success. The right voltage to freeze the nerve cells’ growth turned out to be .6, rather than the .4 that had done the job with other cells. The scientists were then able to produce slide after slide of PC12 nerve cells arrayed in alternating stripes: the cell-less stripes corresponded to bands of indium tin oxide that had been laced with .6 volt, while neighboring bands of plain glass boasted dense growths of interconnected nerve cells. The cells crowd the glass stripes to avoid the charged indium tin oxide stripes; even their neurites don’t cross over onto the electrodes. “We were very surprised,” says Aizawa. “Even if the cell starts out on top of the electrode, after a few days it will be off to the side of it. I don’t know how it does that. I think maybe it rolls.” Apparently, speculated Aizawa, a voltage of .6 is just enough to realign charged molecules on the surface of the cell into a shape that blocks the entry of NGF and thus prevents neurite growth.

For the past two years Aizawa has worked on refining his control over the growth of the nerve cells. He has now achieved a rough version of the sought-after neural strings, stripes of interconnected cells less than a thousandth of an inch wide. “That’s the easy part,” he shrugs. The hard part, on which Aizawa is now focusing, is to design an input and output to his string: that is, a way to introduce electronic signals into the string and to detect the resulting signals that emerge from the other end. “The usual way to put signals into a neuron is by sticking a probe into it,” he explains. “But that kills the cell. I need a noninvasive, nondestructive technique.”

Fortunately, he is already halfway there by virtue of having grown the cells on top of a semiconductor. He is now trying to develop a checkerboard grid of electrodes so that he can selectively stimulate the individual nerve cells on top of each square. The same electrodes could be used to extract the resulting signals from other nerve cells. “I think I can do this,” he says, “but it will take two or three more years.” Once he does, he can start to learn how to use signals to strengthen connections, the prerequisite to nerve programming. If that works, he could attempt a simple programmable neural network.

If his checkerboard chip proves able to perform rudimentary tasks such as recognizing simple patterns, the next step will be to try to build a three-dimensional structure of nerve cells capable of more complex functions. How would he do that? The same way nature handles it: by getting the neurons to arrange themselves. “Our brain works by self-organization,” he explains. “I don’t know how to go about achieving this, but I hope to find a way. I think it will take more than ten years. If we succeed, though, we will be able to build at least a part of a brain.” Such a bizarre device would, in theory, be able to learn much as our own brains do.

Even if Aizawa doesn’t make it as far as an artificial brain, his efforts won’t be wasted. “I have already been approached by doctors who want to make an interface between the nervous system and prostheses,” he says. “My device could connect nerves in the shoulder with wires in an artificial arm. Or it could connect the optic nerve with a tiny camera to make an artificial eye.”

On the other hand, once you’ve set off on the step-by-step path that ends when you bring a brain--even an artificial one--to life, stopping short of your goal would have to seem a little disappointing. Does Aizawa think he’ll succeed? “I don’t know,” he says. “I hope.”
