
The Quest to Build a Silicon Brain

An engineer's revolutionary new chip, inspired by how our own brains work, could turn computing on its head.

By Adam Piore
May 24, 2013 5:00 PM (updated Nov 12, 2019 6:41 AM)


The day he got the news that would transform his life, Dharmendra Modha, 17, was supervising a team of laborers scraping paint off iron chairs at a local Mumbai hospital. He felt happy to have the position, which promised steady pay and security — the most a poor teen from Mumbai could realistically aspire to in 1986. 

Modha’s mother sent word to the job site shortly after lunch: The results from the statewide university entrance exams had come in. There appeared to be some sort of mistake, because a perplexing telegram had arrived at the house. 

Modha’s scores hadn’t just placed him atop the city, the most densely inhabited in India — he was No. 1 in math, physics and chemistry for the entire state of Maharashtra, population 100 million. Could he please proceed to the school to sort it out? 

Back then, Modha couldn’t conceive what that telegram might mean for his future. Both his parents had ended their schooling after the 11th grade. He could count on one hand the number of relatives who went to college. 

But Modha’s ambitions have expanded considerably in the years since those test scores paved his way to one of India’s most prestigious technical academies, and a successful career in computer science at IBM’s Almaden Research Center in San Jose, Calif. 

Recently, the diminutive engineer with the bushy black eyebrows, closely cropped hair and glasses sat in his Silicon Valley office and shared a vision to do nothing less than transform the future of computing. “Our mission is clear,” said Modha, now 44, holding up a rectangular circuit board featuring a golden square.

“We’d like these chips to be everywhere — in every corner, in everything. We’d like them to become absolutely essential to the world.” 

Traditional chips are sets of miniaturized electrical components on a small plate used by computers to perform operations. They often consist of millions of tiny circuits capable of encoding and storing information while also executing programmed commands. 

Modha’s chips do the same thing, but at such enormous energy savings that the computers built from them could, by design, handle far more data. With the new chips as linchpin, Modha has envisioned a novel computing paradigm, one far more powerful than anything that exists today, modeled on the same magical entity that allowed an impoverished laborer from Mumbai to ascend to one of the great citadels of technological innovation: the human brain. 

Turning to Neuroscience

The human brain consumes about as much power as a 20-watt bulb — a billion times less than a computer that simulates brainlike computations. It is so compact it can fit in a two-liter soda bottle. Yet this pulpy lump of organic material can do things no modern computer can. 

Sure, computers are far superior at performing pre-programmed computations — crunching payroll numbers or calculating the route a lunar module needs to take to reach a specific spot on the moon. But even the most advanced computers can’t come close to matching the brain’s ability to make sense out of unfamiliar sights, sounds, smells and events, and quickly understand how they relate to one another. 

Nor can such machines equal the human brain’s capacity to learn from experience and make predictions based on memory. 

Five years ago, Modha concluded that if the world’s best engineers still hadn’t figured out how to match the brain’s energy efficiency and resourcefulness after decades of trying using the old methods, perhaps they never would. 

So he tossed aside many of the tenets that have guided chip design and software development over the past 60 years and turned to the literature of neuroscience. Perhaps understanding the brain’s disparate components and the way they fit together would help him build a smarter, more energy-efficient silicon machine. 

These efforts are paying off. Modha’s new chips contain silicon components that crudely mimic the physical layout of, and connections between, microscopic carbon-based brain cells. Modha is confident that his chips can be used to build a cognitive computing system on the scale of a human brain for only 100 times more power, making it 10 million times more energy efficient than the computers of today.

Already, Modha’s team has demonstrated some basic capabilities. Without the help of a programmer explicitly telling them what to do, the chips they’ve developed can learn to play the game Pong, moving a bar along the bottom of the screen and anticipating the exact angle of a bouncing ball. They can also recognize the numbers zero through nine as a lab assistant scrawls them on a pad with an electronic pen.

Of course, plenty of engineers have pulled off such feats — and far more impressive ones. An entire subspecialty known as machine learning is devoted to building algorithms that allow computers to develop new behaviors based on experience. Such machines have beaten the world’s best minds in chess and Jeopardy!

But while machine learning theorists have made progress in teaching computers to perform specific tasks within a strict set of parameters — such as how to parallel park a car or plumb encyclopedias for answers to trivia questions — their programs don’t enable computers to generalize in an open-ended way. 

Modha hopes his energy-efficient chips will usher in change. “Modern computers were originally designed for three fundamental problems: business applications, such as billing; science, such as nuclear physics simulation; and government programs, such as Social Security,” Modha states. 

The brain, on the other hand, was forged in the crucible of evolution to quickly make sense of the world around it and act upon its conclusions. “It has the ability to pick out a prowling predator in tall grasses, amid a huge amount of noise, without being told what it is looking for. It isn’t programmed. It learns to escape and avoid the lion.”

Machines with similar capabilities could help solve one of mankind’s most pressing problems: the overload of information. Between 2005 and 2012, the amount of digital information created, replicated and consumed worldwide increased over 2,000 percent — exceeding 2.8 trillion gigabytes in 2012. 

By some estimates, that’s almost as many bits of information as there are stars in the observable universe. The arduous task of writing the code that instructs today’s computers to make sense of this flood of information — how to order it, analyze it, connect it, what to do with it — is already far outstripping the abilities of human programmers. 

Cognitive computers, Modha believes, could plug the gap. Like the brain, they will weave together inputs from multiple sensory streams, form associations, encode memories, recognize patterns, make predictions and then interpret, perhaps even act — all using far less power than today’s machines. 

Drawing on data streaming in from a multitude of sensors monitoring the world’s water supply, for instance, the computer might learn to recognize changes in pressure, temperature, wave size and tides, then issue tsunami warnings, even though current science has yet to identify the constellation of variables associated with the monster waves. 

Brain-based computers could help emergency department doctors render elusive diagnoses even when science has yet to recognize the collection of changes in body temperature, blood composition or other variables associated with an underlying disease. 

“You will still want to store your salary, your gender, your Social Security number in today’s computers,” Modha says. “But cognitive computing gives us a complementary paradigm for a radically different kind of machine.”

Lighting the Network

Modha is hardly the first engineer to draw inspiration from the brain. An entire field of computer science has grown out of insights derived from the way the smallest units of the brain — cells called neurons — perform computations. 

It is the firing of neurons that allows us to think, feel and move. Yet these abilities stem not from the activity of any one neuron, but from networks of interconnected neurons sending and receiving simple signals and working in concert with each other. 

The potential for brainlike machines emerged as early as 1943, when neurophysiologist Warren McCulloch and mathematician Walter Pitts proposed an idealized mathematical formulation for the way networks of neurons interact to cause one another to fire, sending messages throughout the brain.

In a biological brain, neurons communicate by passing electrochemical signals across junctions known as synapses. Often the process starts with external stimuli, like light or sound. If the stimulus is intense enough, voltage across the membrane of receiving neurons exceeds a given threshold, signaling neurochemicals to fly across the synapses, causing more neurons to fire and so on and so forth. 

When a critical mass of neurons fire in concert, the input is perceived by the cognitive regions of the brain. With enough neurons firing together, a child can learn to ride a bike and a mouse can master a maze. 

McCulloch and Pitts pointed out that no matter how many inputs their idealized neuron might receive, it would always be in one of only two possible states — activated or at rest, depending upon whether the threshold for excitation had been crossed. 

Because neurons follow this “all-or-none law,” every computation the brain performs can be reduced to a series of true-or-false expressions, where true and false can be represented by 1 and 0, respectively. Modern computers are also based on logic systems using 1s and 0s, with information coming from electric switches instead of the outside environment.

McCulloch and Pitts had captured a fundamental similarity between brains and computers. Given the capacity to ask enough yes-or-no questions, either one should eventually arrive at an answer to even the most complicated problem. 

As an example, to draw a boundary between a group of red dots and blue dots, one might first ask of each dot whether it is red (yes/no) or blue (yes/no). Then one might ask whether two neighboring dots differ in color (yes/no). With enough layers of such questions and answers, one could tackle almost any complex question at all. 
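To make that layering concrete, here is a minimal sketch in Python (an illustration of the McCulloch-Pitts idea, not code from any of the projects described here): a single idealized unit answers one yes-or-no question by comparing a weighted sum to a threshold, and a second layer of such units can answer a question, such as whether two inputs differ, that no single unit can.

```python
def mcp_neuron(inputs, weights, threshold):
    """Idealized McCulloch-Pitts unit: all-or-none output, 1 or 0."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# Single units answer simple yes-or-no questions (logic gates).
AND = lambda a, b: mcp_neuron([a, b], [1, 1], threshold=2)
OR  = lambda a, b: mcp_neuron([a, b], [1, 1], threshold=1)
NOT = lambda a:    mcp_neuron([a],    [-1],   threshold=0)

# A second layer of units answers a question no single unit can:
# "are the two inputs different?" (exclusive OR).
def different(a, b):
    return AND(OR(a, b), NOT(AND(a, b)))

for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", different(a, b))
```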

Yet this kind of logical ability seemed far removed from the capacity of brains, made of networks of neurons, to encode memories or learn. That capacity was explained in 1949 by Canadian psychologist Donald Hebb, who hypothesized that when two neurons fire in close succession, connections between them strengthen. “Neurons that fire together wire together” is the catchy phrase that emerged from his pivotal work.
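A minimal sketch of Hebb's rule as it is usually formalized (the array names and learning rate are illustrative assumptions, not drawn from Hebb's or Modha's work): when a sending unit and a receiving unit are active in the same step, the connection between them is strengthened.

```python
import numpy as np

def hebbian_update(weights, pre, post, rate=0.1):
    """'Fire together, wire together': strengthen every connection
    whose sending (pre) and receiving (post) neurons were both active."""
    return weights + rate * np.outer(post, pre)

weights = np.zeros((3, 3))       # rows: receiving neurons, columns: sending
pre  = np.array([1, 0, 1])       # which sending neurons fired
post = np.array([0, 1, 1])       # which receiving neurons fired
weights = hebbian_update(weights, pre, post)
print(weights)                   # only co-active pairs are strengthened
```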

Connections between neurons explain how narrative memory is formed. In a famous literary example, Marcel Proust’s childhood flooded back when he dipped a madeleine in his cup of tea and took a bite. The ritual was one he had performed often during childhood. When he repeated it years later, neurons fired in the areas of the brain storing these taste and motor memories. 

As Hebb had suggested, those neurons had strong physical connections to other neurons associated with other childhood memories. Thus when Proust tasted the madeleine, the neurons encoding those memories also fired — and Proust was flooded with so many associative memories he filled volumes of his masterwork, In Search of Lost Time.

Dharmendra Modha and team member Bill Risk stand by a supercomputer at the IBM Almaden facility. Using the supercomputers at Almaden and Lawrence Livermore National Laboratory, the group simulated networks that crudely approximated the brains of mice, rats, cats and humans.  | Majed Abolfazli

By 1960, computer researchers were trying to model Hebb’s ideas about learning and memory. One effort was a crude brain mock-up called the perceptron. The perceptron contained a network of artificial neurons, which could be simulated on a computer or physically built with two layers of electrical circuits. 

The space between the layers was said to represent the synapse. When the layers communicated with each other by passing signals over the synapse, that was said to model (roughly) a living neural net. One could adjust the strength of signals passed between the two layers — and thus the likelihood that the first layer would activate the second (much like one firing neuron activates another to pass a signal along). 

Perceptron learning occurred when the second layer was instructed to respond more powerfully to some inputs than others. Programmers trained an artificial neural network to “read,” activating more strongly when shown patterns of light depicting certain letters of the alphabet and less strongly when shown others. 

The idea that one could train a computer to categorize data based on experience was revolutionary. But the perceptron was limited: Consisting of a mere two layers, it could only recognize a “linearly separable” pattern, such as a plot of black dots and white dots that can be separated by a single straight line (or, in more graphic terms, a cat sitting next to a chair). Show it a plot of black and white dots depicting something more complex, like a cat sitting on a chair, and it was utterly confused. 
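Here is a short sketch of the classic perceptron learning rule (an illustration, not the original perceptron hardware or code): weights are nudged whenever the output disagrees with the label, which settles on a correct answer for a linearly separable pattern such as OR but never for one that is not, such as exclusive OR.

```python
import numpy as np

def train_perceptron(X, y, epochs=20, rate=0.1):
    """Classic perceptron rule: nudge weights toward misclassified points."""
    w, b = np.zeros(X.shape[1]), 0.0
    for _ in range(epochs):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b >= 0 else 0
            error = target - pred          # -1, 0 or +1
            w += rate * error * xi
            b += rate * error
    return w, b

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y_or  = np.array([0, 1, 1, 1])   # linearly separable: learned correctly
y_xor = np.array([0, 1, 1, 0])   # not separable: the perceptron fails

for y in (y_or, y_xor):
    w, b = train_perceptron(X, y)
    preds = [1 if xi @ w + b >= 0 else 0 for xi in X]
    print(y.tolist(), "->", preds)
```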

It wasn’t until the 1980s that engineers developed an algorithm, known as backpropagation, capable of taking neural networks to the next level. Now programmers could adjust the weights not just between two layers of artificial neurons, but also across a third, a fourth — even a ninth layer — in between, representing a universe where many more details could live. 

This expanded the complexity of questions such networks could answer. Suddenly neural networks could draw squiggly boundaries between black and white dots, recognizing both the cat and the chair it was sitting on at the same time.
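A compact sketch of what that 1980s advance looks like in practice (a toy example with assumed layer sizes and learning rate, not historical code): a hidden layer sits between input and output, and the weights on both sides are adjusted by passing the error backward. With these settings the outputs typically approach the exclusive-OR targets that defeated the two-layer perceptron.

```python
import numpy as np

rng = np.random.default_rng(1)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)    # exclusive OR

W1, b1 = rng.normal(size=(2, 4)), np.zeros(4)      # input  -> hidden
W2, b2 = rng.normal(size=(4, 1)), np.zeros(1)      # hidden -> output
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for _ in range(5000):
    # forward pass through the extra layer in between
    h = np.tanh(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # backward pass: propagate the error to every layer's weights
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * (1 - h ** 2)
    W2 -= h.T @ d_out;  b2 -= d_out.sum(axis=0)
    W1 -= X.T @ d_h;    b1 -= d_h.sum(axis=0)

print(out.round(2).ravel())      # should move toward 0, 1, 1, 0
```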

Out of Bombay

Just as the neural net revival was picking up steam, Modha entered India’s premier engineering school, the Indian Institute of Technology in Bombay. He graduated with a degree in computer science and engineering in 1990. 

As Modha looked to continue his education, few areas seemed as hot as the reinvigorated field of neural networks. In theory, the size of neural networks was limited only by the size of computers and the ingenuity of programmers. 

In one powerful example of the new capabilities around that time, Carnegie Mellon graduate student Dean Pomerleau used simulated images of road conditions to teach a neural network to interpret live road images picked up by cameras attached to a car’s onboard computer. Traditional programmers had been stumped because even subtle changes in angle, lighting or other variables threw off pre-programmed software coded to recognize exact visual parameters.

Instead of trying to precisely code every possible image or road condition, Pomerleau simply showed a neural network different kinds of road conditions. Once it was trained to drive under specific conditions, it was able to generalize to drive under similar but not identical conditions. 

Using this method, a computer could recognize a road with metal dividers based on its similarities to a road without dividers, or a rainy road based on its similarity to a sunny road — an impossibility using traditional coding techniques. After being shown images of various left-curving and right-curving roads, it could recognize roads curving at any angle. 

Other programmers designed a neural network to detect credit card fraud by exposing it to purchase histories of good versus fraudulent card accounts. Based on the general spending patterns found in known fraudulent accounts, the neural network was able to recognize the behavior and flag new fraud cases.

The neural networking mecca was San Diego — in 1987, about 1,500 people met there for the first significant conference on neural networking in two decades. And in 1991, Modha arrived at the University of California, San Diego to pursue his Ph.D. He focused on applied math, constructing equations to examine how many dimensions of variables certain systems could handle, and designing configurations to handle more. 

By the time Modha was hired by IBM in 1997 in San Jose, another computing trend was taking center stage: the explosion of the World Wide Web. Even back then, it was apparent that the flood of new data was overwhelming programmers. The Internet offered a vast trove of information about human behavior, consumer preferences and social trends. 

But there was so much of it: How did one organize it? How could you begin to pick patterns out of files that could be classified based on tens of thousands of characteristics? 

Computers of the day consumed far too much energy to ever handle the data or the massive programs required to take every contingency into account. And with a growing array of sensors gathering visual, auditory and other information in homes, bridges, hospital emergency departments and everywhere else, the information deluge would only grow. 

A Canonical Path

The more Modha thought about it, the more he became convinced that the solution might be found by turning back to the brain, the most effective and energy-efficient pattern recognition machine in existence. Looking to the neuroscientific literature for inspiration, he found the writings of MIT neuroscientist Mriganka Sur. 

Sur had severed the neurons connecting the eyes of newborn ferrets to the brain’s visual cortex; then he reconnected those same neurons to the auditory cortex. Even with eyes connected to the sound-processing areas of the brain, the rewired animals could still see as adults. 

To Modha, this revealed a fascinating insight: The neural circuits in Sur’s ferrets were flexible — as interchangeable, it seemed, as the back and front tires of some cars. Sur’s work implied that to build an artificial cortex on a computer, you only needed one design to create the “circuit” of neurons that formed all its building blocks. 

If you could crack the code of that circuit — and embody it in computation — all you had to do was repeat it. Programmers wouldn’t have to start over every time they wanted to add a new function to a computer, using pattern recognition algorithms to make sense of new streams of data. They could just add more circuits. 

“The beauty of this whole approach,” Modha enthusiastically explains, “is that if you look at the mammalian cerebral cortex as a road map, you find that by adding more and more of these circuits, you get more and more functionality.”

In search of a master neural pattern, Modha discovered that European researchers had come up with a mathematical description of what appeared to be the same circuit Sur had investigated in ferrets, but this time in cats. 

If you unfolded the cat cortex and unwrinkled it, you would find the same six layers repeated again and again. When connections were drawn between different groups of neurons in the different layers, the resulting diagrams looked an awful lot like electrical circuit diagrams.

Modha and his team began programming an artificial neural network that drew inspiration from these canonical circuits and could be replicated multiple times. The first step was determining how many of these virtual circuits they could link together and run at once on IBM’s traditional supercomputers. 

Would it be possible to reach the scale of a human cortex? 

At first Modha and his team hit a wall before they reached 40 percent of the number of neurons present in the mouse cerebral cortex: roughly 8 million neurons, with 6,300 synaptic connections apiece. The truncated circuitry limited the learning, memory and creative intelligence their simulation could achieve. 

So they turned back to neuroscience for solutions. The actual neurons in the brain, they realized, only become a factor in the organ’s overall computational process when they are activated. When inactive, neurons simply sit on the sidelines, expending little energy and doing nothing. So there was no need to update the relationship between 8 million neurons 1,000 times a second. Doing so only slowed the system down. 

Instead, they could emulate the brain by instructing the computer to focus attention only on neurons that had recently fired and were thus most likely to fire again. With this adjustment, the speed at which the supercomputer could simulate a brain-based system increased a thousandfold. By November 2007, Modha had simulated a neural network on the scale of a rat cortex, with 55 million neurons and 442 billion synapses. 
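A toy sketch of that event-driven shortcut (an illustration, not IBM's actual simulator): rather than updating every neuron on every tick, the loop below touches only the neurons downstream of ones that just fired, so silent neurons cost nothing.

```python
from collections import defaultdict, deque

def simulate(connections, weights, thresholds, initial_spikes, steps=10):
    """Event-driven spiking simulation: only process targets of recent spikes."""
    potential = defaultdict(float)
    active = deque(initial_spikes)          # neurons that just fired
    fired = []
    for _ in range(steps):
        next_active = deque()
        while active:
            src = active.popleft()
            fired.append(src)
            for dst in connections.get(src, []):
                potential[dst] += weights[(src, dst)]
                if potential[dst] >= thresholds[dst]:
                    potential[dst] = 0.0    # reset, then fire next tick
                    next_active.append(dst)
        active = next_active
    return fired

# Tiny illustrative chain: A excites B, which excites C.
connections = {"A": ["B"], "B": ["C"]}
weights = {("A", "B"): 1.0, ("B", "C"): 1.0}
thresholds = {"B": 1.0, "C": 1.0}
print(simulate(connections, weights, thresholds, initial_spikes=["A"]))
```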

Two years later his team scaled it up to the size of a cat brain, simulating 1.6 billion neurons and almost 9 trillion synapses. Eventually they scaled the model up to simulate a system of 530 billion neurons and 100 trillion synapses, a crude approximation of the human brain.

Building a Silicon Brain

The researchers had simulated hundreds of millions of repetitions of the kind of canonical circuit that might one day enable a new breed of cognitive computer. But it was just a model, running at a maddeningly slow speed on legacy machines that could never be brainlike, never step up to the cognitive plate. 

In 2008, the federal Defense Advanced Research Projects Agency (DARPA) announced a program aimed at building the hardware for an actual cognitive computer. The first grant was for the creation of an energy-efficient chip that would serve as the heart and soul of the new machine — a dream come true for Modha. 

With DARPA’s funding, Modha unveiled his new, energy-efficient neural chips in summer 2011. Key to the chips’ success was their processors, chip components that receive and execute instructions for the machine. Traditional computers contain a small number of very fast processors (modern laptops usually have two to four processor cores on a single chip) that are almost always working. Every millisecond, these processors scan millions of electrical switches, monitoring and flipping thousands of circuits between two possible states, 1 and 0 — activated or not. 

To store the patterns of ones and zeros, today’s computers use a separate memory unit. Electrical signals are conveyed between the processor and memory over a pathway known as a memory bus. Engineers have increased the speed of computing by shortening the length of the bus. 

Some servers can now loop from memory to processor and back around a few hundred million times per second. But even the shortest buses consume energy and create heat, requiring lots of power to cool. 

The brain’s architecture is fundamentally different, and a computer based on the brain would reflect that. Instead of a small number of large, powerful processors working continuously, the brain contains billions of relatively slow, small processors — its neurons — which consume power only when activated. And since the brain stores memories in the strength of connections between neurons, inside the neural net itself, it requires no energy-draining bus.

The processors in Modha’s new chip are the smallest units of a computer that works like the brain: Every chip contains 256 very slow processors, each one representing an artificial neuron. (By comparison, a roundworm brain consists of about 300 neurons.) Only activated processors consume significant power at any one time, making energy consumption low. 

But even when activated, the processors need far less power than their counterparts in traditional computers because the tasks they are designed to execute are far simpler: Whereas a traditional computer processor is responsible for carrying out all the calculations and operations that allow a computer to run, Modha’s tiny units only need to sum up the number of signals received from other virtual neurons, evaluate their relative weights and determine whether there are enough of them to prompt the processor to emit a signal of its own. 
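In code terms, the job of each of those tiny processors might look roughly like the sketch below (an analogy built on assumed numbers, not IBM's circuit design): accumulate weighted incoming spikes, compare the running total against a threshold, and either emit a spike or stay quiet.

```python
import numpy as np

class NeuronProcessor:
    """Sketch of one artificial-neuron processor: sum incoming signals,
    weigh them, and spike if the running total crosses a threshold."""

    def __init__(self, weights, threshold):
        self.weights = np.asarray(weights, dtype=float)
        self.threshold = threshold
        self.potential = 0.0

    def receive(self, spikes):
        # spikes is a 0/1 vector: which input lines carried a signal
        self.potential += float(self.weights @ np.asarray(spikes))
        if self.potential >= self.threshold:
            self.potential = 0.0     # reset after firing
            return 1                 # emit a spike of its own
        return 0                     # otherwise stay silent

# One chip would hold 256 such units; here only the first one is poked.
core = [NeuronProcessor(weights=np.ones(4), threshold=3.0) for _ in range(256)]
print(core[0].receive([1, 1, 0, 0]))   # total 2, below threshold -> 0
print(core[0].receive([0, 1, 1, 0]))   # total now 4, threshold crossed -> 1
```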

Modha has yet to link his new chips and their processors in a large-scale network that mimics the physical layout of a brain. But when he does, he is convinced that the benefits will be vast. Evolution has invested the brain’s anatomy with remarkable energy efficiencies by positioning those areas most likely to communicate closer together; the closer neurons are to one another, the less energy they need to push a signal through. By replicating the big-picture layout of the brain, Modha hopes to capture these and other unanticipated energy savings in his brain-inspired machines.

He has spent years poring over studies of long-distance connections in the rhesus macaque monkey brain, ultimately creating a map of 383 different brain areas, connected by 6,602 individual links. The map suggests how many cognitive computing chips should be allocated to the different regions of any artificial brain, and which other chips they should be wired to. 

For instance, 336 links begin at the main vision center of the brain. An impressive 1,648 links emerge from the frontal lobe, which contains the prefrontal cortex, a centrally located brain structure that is the seat of decision-making and cognitive thought. As with a living brain, the neural computer would have most connections converging on a central point. 
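One way to picture how such a map could guide the hardware layout (a sketch with invented region names and counts, not Modha's actual data): store the long-distance links as a directed graph, then read off each region's outgoing links to decide how much chip-to-chip wiring it needs.

```python
from collections import defaultdict

# Hypothetical fragment of a region-to-region link list; the real map
# has 383 regions and 6,602 links.
links = [
    ("V1", "V2"), ("V1", "MT"), ("V2", "V4"),
    ("PFC", "V4"), ("PFC", "MT"), ("PFC", "BasalGanglia"),
]

outgoing = defaultdict(list)
for src, dst in links:
    outgoing[src].append(dst)

# Regions with more outgoing links would get more chips and more wiring.
for region, targets in sorted(outgoing.items(), key=lambda kv: -len(kv[1])):
    print(f"{region}: {len(targets)} links -> {targets}")
```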

Of course, even if Modha can build this brainiac, some question whether it will have any utility at all. Geoff Hinton, a leading neural networking theorist, argues the hardware is useless without the proper “learning algorithm” spelling out which factors change the strength of the synaptic connections and by how much. Building a new kind of chip without one, he argues, is “a bit like building a car engine without first figuring out how to make an explosion and harness the energy to make the wheels go round.” 

But Modha and his team are undeterred. They argue that they are complementing traditional computers with cognitive-computing-like abilities that offer vast savings in energy, enabling capacity to grow by leaps and bounds. The need grows more urgent by the day. By 2020, the world will generate 14 times the amount of digital information it did in 2012. Only when computers can spot patterns and make connections on their own, says Modha, will the problem be solved. 

Creating the computer of the future is a daunting challenge. But Modha learned long ago, halfway across the world as a teen scraping the paint off chairs, that if you tap the power of the human brain, there is no telling what you might do.

[This article originally appeared in print as "Mind in the Machine."] 

Inside Modha's Neural Chip

A circuit board containing Modha’s golden neural chip is shown at right. Within the square of gold is a smaller, golden rectangle the size of a grain of rice. The rectangle is the core that houses the electronic equivalent of biological nerve cells, or neurons. 

In living neurons, electrochemical signals travel down a long, slender stalk, called an axon, to protrusions called dendrites. The signals leap from the axons across a synapse, or gap, to the dendrites of the next nerve cell in the neural net. 

The process is emulated in Modha’s silicon chip, where 256 processors in the core each serve as an artificial neuron that receives signals from its own “dendrite line.” (See magnified grid, far right.) Those lines are arranged parallel to one another but perpendicular to the signal-sending “axon lines.” 

Within the grid, each axon-dendrite intersection is a synapse — analogous to a biological synapse — that shunts impulses from axon lines to all processors. In neurons, the signal crosses the synapse if it is intense enough. In Modha’s golden core, each processor counts up the signals it receives from incoming axons by way of the dendrites. 

If a certain threshold is exceeded, the processor sends out its own signal, or spike. Spikes are routed via the green circuit board to an external computer for data collection before being sent back to the chip. 
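A rough sketch of that grid arrangement (with illustrative sizes and thresholds, not IBM's specification): rows stand for incoming axon lines, columns for the dendrite lines feeding the processors, and each intersection is a synapse that either passes a spike or does not.

```python
import numpy as np

rng = np.random.default_rng(42)
n_axons, n_processors = 1024, 256

# The grid: synapses[i, j] == 1 means axon line i feeds processor j.
synapses = rng.integers(0, 2, size=(n_axons, n_processors))
thresholds = np.full(n_processors, 200)

# One tick: some axon lines carry spikes into the grid ...
axon_spikes = np.zeros(n_axons, dtype=int)
axon_spikes[rng.choice(n_axons, size=400, replace=False)] = 1

# ... each processor counts the spikes arriving on its dendrite line,
# and those whose count crosses the threshold emit spikes of their own.
counts = axon_spikes @ synapses
out_spikes = (counts >= thresholds).astype(int)
print(out_spikes.sum(), "of", n_processors, "processors spiked this tick")
```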

— Fangfei Shen

IBM Research

Mapping the Monkey Brain

To gain more insight into neural computing, Modha has mapped the brain of the rhesus macaque monkey, which is similar to our own. He later used the map to simulate a human-scale brain on an IBM supercomputer. 

The macaque-derived map models the core — the prefrontal cortex and other parts of the brain involved in consciousness, cognition and higher thought. Communication throughout the network is mainly conducted through the core, where 88 percent of all connections start or end. 

In a computer, such connectivity could be implemented through a network of golden chips spanning the entire system. What Modha finds most interesting is that the macaque’s innermost core appears to include two brain networks found in humans — one that activates introspective thought and another that activates goal-oriented action, suggesting a special role in consciousness. 

To take on that role in a machine, the core would have to connect to the other parts of the brain, diagrammed as a network at right. There you can see the cerebral cortex, the center of memory and intellect, and its four major parts: the frontal lobe, associated with higher cognition and expressive language; the parietal lobe, associated with processing pressure, touch and pain; and the occipital and temporal lobes for processing vision and sound, respectively. 

Hoping to emulate the macaque, Modha has also included circuits to represent the basal ganglia, which controls movement and motivation, and the insula and cingulate, both involved in processing emotion, among many others for a total of 383 regions connected by 6,602 individual links. 

— Fangfei Shen
