The smartphone in your pocket owes a lot to someone you’ve probably never heard of—a Silicon Valley engineer named Jeff Hawkins. In 1996 he released the PalmPilot, the first commercially successful handheld computing device, then followed it up six years later with the Treo, one of the world’s first smartphones. More recently, Hawkins has focused his talents on an altogether different effort: tracing the architecture of the neocortex, a brain region involved in conscious thought, and using this information to develop new intelligent machines.
During the last decade, Hawkins opened a nonprofit neuroscience institute, then began working on a computer algorithm that could match the brain’s extraordinary ability to digest reams of data, discern patterns, and make predictions. In 2005 he founded the company Numenta, which will soon release its first product, a program that he predicts will usher in an era of smart, brain-inspired machines that learn from experience and make novel predictions about the world.
Hawkins is also a sharp critic of today’s brain science. According to him, the field lacks the big ideas needed to tie together the voluminous data already available about brain function. Discover contributing editor Adam Piore picked Hawkins’s brain for details.
Discover: People often compare the brain to a computer. You flip the analogy, proposing to build machines based on the principles of the brain. Why do you think that’s the better approach?
Jeff Hawkins: A computer is a system for executing instructions. A programmer writes the instructions that tell the computer what to do. There are no instructions in your brain. No one programs it. Instead, it is a sophisticated memory system that learns through exposure to the world by means of its senses. Humans have a very large brain, and that’s why it takes almost 20 years to train it. At Numenta we use computers to emulate the brain’s memory system. Our models, which are equivalent to a small slice of the neocortex, typically have about 60,000 simulated neurons. We feed sensory data into this model, and it starts to learn. We train it just like you train a child. It starts forming connections between these modeled neurons through modeled synapses. After sufficient training, it starts recognizing patterns, starts making predictions, and starts understanding what’s going on in the data.
How does your simulated brain differ from traditional approaches to artificial intelligence, or AI? They also use memory, don’t they?
JH: The biggest difference between brains and artificial intelligence is how knowledge is represented in memory. In classic AI, a programmer decides what kind of information has to be stored and how to represent it in the computer’s memory. The computer’s memory is “flat” or “linear” and by itself doesn’t help organize or understand the information; that is up to the programmer. But the brain doesn’t store information that way. The physical structure of the brain’s memory is intimately tied to how information is stored and what the information means. For example, memory in the brain is organized hierarchically: the way neurons are wired to one another is itself hierarchical, and that physical wiring dictates how information is stored. When you think, “What does my mother look like?” or “What does a dog look like?” or “How do cars behave?” that knowledge is not stored in one place. It’s stored in many small pieces at different levels of the hierarchy and reflects the hierarchical structure of the world itself.
If I asked you first to imagine your mother’s face and then to imagine my mother’s face, and I went into your brain and looked at the neurons that fired, I’d find that a lot of the neurons firing would be the same. Some would be different, and those different ones would represent the differences between those two faces. Similar patterns in the world result in similar activations of neurons. That is not at all what would happen in a traditional computer. In the computer world, the way a programmer represents your mother’s face is by some arbitrary set of numbers or bytes. The memory representation for your mother and the memory representation for my mother might be in completely different locations in the computer’s memory, and even if the faces looked similar, there might be no commonality to their representation. By looking at the computer’s memory, you couldn’t tell they looked similar. You couldn’t even tell they were both mothers.
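The overlapping-neurons idea Hawkins describes can be made concrete with a small Python sketch. This is not Numenta's code; the sizes and helper names are invented for illustration. The point is that in a sparse representation, similarity between two things can be read directly off their stored form, as the count of shared active bits, whereas two arbitrary byte strings in conventional memory reveal nothing about similarity.

```python
import random

SIZE, ACTIVE = 2048, 40   # 2048 bits with ~2% active: a sparse code

def make_sdr(seed):
    """Generate a sparse representation: a small set of active bit positions."""
    return set(random.Random(seed).sample(range(SIZE), ACTIVE))

def similar_sdr(base, flip=8, seed=7):
    """Represent a similar input by keeping most bits and changing a few."""
    rng = random.Random(seed)
    kept = base - set(rng.sample(sorted(base), flip))  # drop a few bits
    result = set(kept)
    while len(result) < ACTIVE:                        # replace them randomly
        result.add(rng.randrange(SIZE))
    return result

your_mother = make_sdr(seed=1)
my_mother = similar_sdr(your_mother)   # a similar face shares most bits
a_car = make_sdr(seed=2)               # an unrelated concept

# Overlap (shared active bits) directly encodes similarity:
print(len(your_mother & my_mother))    # large: the representations overlap
print(len(your_mother & a_car))        # near zero: almost nothing shared
```

Run as written, the two similar patterns share at least 32 of their 40 active bits, while the unrelated pattern shares almost none.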
What are the practical implications of your hierarchical approach to artificial intelligence?
JH: We’ve built a product at Numenta based on our brain model that we plan to launch later this year. It’s very cool. You train it on streams of data, it builds a model of the data, and then it makes predictions about the future of that data. You can imagine there are many business applications, such as predicting various things about health care, energy usage, market prices, and so on.
What can be done with your system that hasn’t been possible before?
JH: Suppose you’re interested in predicting energy usage in buildings. Knowing how much energy is going to be used by a building an hour or two in advance can save energy and money. The amount of energy that a building uses varies quite a bit hour by hour and day by day depending on many factors, such as the day of the week, the weather, how many people are in the building, and what’s going on in the neighborhood. We feed all this information into our product, and it learns to predict future energy usage. There are two big challenges to doing this. First, how do you build a model for not just one building but for every building in a city? Second, how do you make sure that the model changes when the world changes, such as at different hours of the day? Our product solves these two problems by having a system that learns versus one that is programmed.
That’s just one example. We’re in the very beginning of where this technology will lead us. Ultimately, learning systems based on brain architecture will have as big an impact on our world as computers have today.
What kind of impact? How might this kind of technology transform the future?
JH: Imagine a future where there are many billions of sources of data. Every phone, every car, every building, every website, every appliance will be capturing information. Some people call this the Internet of things. Brainlike systems will allow us to understand the patterns in those massive streams of data and act on those patterns. They will help us detect fraud, increase manufacturing efficiencies, and improve health care. Brainlike systems will be as pervasive as computers.
Now let’s look 50 years out. I have no doubt we can build brainlike systems that are larger and faster than human brains. This doesn’t imply we will have humanlike robots like you see in popular fiction. I don’t think that’s likely. Instead, intelligent systems will no longer just help us manage our everyday world. They will most likely be powerful enough to assist us with complicated research in physics, cosmology, and mathematics. Science is fundamentally about finding patterns and making predictions, and that is what the brain is good at. Imagine having a thousand supersmart physicists who can think a million times faster than a human, 24 hours a day, and assigning them to work on a problem. This is in the realm of possibility. Of course no one can know what the future holds, but for me, the most exciting aspect of intelligent machines is their potential for accelerating scientific discovery.
How will we work with these new brain-based computers? Will they be in charge, or will we?
JH: I think there’s a fantastic future. We will have lots of really smart machines at our disposal to help us. They will help us drive cars, improve manufacturing quality, make buildings more energy efficient, warn us of unexpected events, and better predict health-care outcomes. Intelligent machines will be constantly looking at the patterns in the world and helping us understand what is happening and predicting the future. They will achieve superhuman performance in many domains. Machine intelligence likely will be embedded in the Internet and become a pervasive part of our lives, but the same can be said of regular computers. The difference is that intelligent machines learn and adapt, whereas computers must be programmed and are relatively inflexible.
How did you make the leap from mobile computing to neuroscience? They seem worlds apart.
JH: Neuroscience is something I have been interested in for a long time. In 1979 I read an article by Francis Crick [co-discoverer of DNA’s double helix] in which he said we don’t understand how the brain works; we’re missing a framework to understand it. After reading that article I couldn’t think of anything more interesting to work on. All aspects of humanity, from our arts to technology to our cultures, are a product of brains. To have a profound lack of knowledge about ourselves seems kind of crazy.
I tried numerous times to study brains from a theoretical perspective, but I learned that you couldn’t get funded with that approach. So I took a few years off to earn a little money and figure out what to do next. That’s when I started Palm and then Handspring. We had a lot of success in mobile computing, but through all my years at Palm and Handspring, I was trying to return to working on brains. I talked about brains all the time. I went to neuroscience conferences. At one point I decided I’ve got to work on brain theory full time. That’s when I left the mobile computing industry.
Why replicate the brain using computers, instead of focusing on research questions?
JH: My primary interest is to understand how brains work. When I got out of mobile computing, I ran a pure neuroscience research institute. However, once you start understanding how the brain works, it is natural to build models to test the theories, and then practical applications start to emerge. I can see that brainlike systems will have tremendous societal benefits so I want to work on them and make them a reality.
Building products is also a good way of promoting the underlying science. The more success we have with the technology, the more people will learn the underlying neuroscience. This will accelerate the neuroscience, but it will have a larger societal value, too. I think it is important for all humans to learn how their brain works. Beyond the practical benefits there are philosophical, moral, and cultural benefits. If humanity is to survive for a long time, it may be necessary for all of us to understand how we learn, think, and form beliefs. Under certain conditions the brain is able to form false beliefs. Understanding how such beliefs are formed could help us all get along.
Do you foresee a day when we will be able to merge computers with the human brain?
JH: No, I don’t see us merging our brains with computers, at least not as it is portrayed in science fiction. There are many practical applications of enhancing brains with computers. Look at the cochlear implant, which allows some deaf people to hear. That is a brain implant, and it’s been a huge success. Similarly, there are people working on tapping into the nervous system to help paralyzed patients regain control of their limbs. Maybe someone will invent a prosthetic eye. Where I become doubtful is when someone says, “We’ll put this implant in your head and you’re going to relay all your thoughts into the computer” or “You’re going to download all your brain’s connections into a computer.” These kinds of statements do not reflect how brains actually work.
[This article originally appeared in print as "Think Like Me."]