In the prologue to his 2020 book, The Alignment Problem: Machine Learning and Human Values, Brian Christian tells the story of the beginnings of the idea of artificial neural networks. In 1942, Walter Pitts, a teenage mathematician and logician, and Warren McCulloch, a mid-career neurologist, teamed up to unravel the mysteries of how the brain worked. It was already known that a neuron either fires or does not, depending on an activation threshold.
"If the sum of the inputs to a neuron exceeded this activation threshold, then the neuron would fire; otherwise, it would not fire," explains Christian.
McCulloch and Pitts immediately saw the logic in the activation threshold — that the pulse of the neuron, with its on and off states, was a kind of logic gate. In the 1943 paper that came out of their early collaboration, they wrote, "Because of the 'all-or-none' character of nervous activity, neural events and the relations among them can be treated by means of propositional logic." The brain, they realized, was a kind of cellular machine, says Christian, "with the pulse or its absence signifying on or off, yes or no, true or false. This was really the birthplace of neural networks."
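The threshold idea Christian describes can be sketched in a few lines of code. This is a minimal illustration of a McCulloch-Pitts-style threshold unit; the particular weights and thresholds below are illustrative choices, not values from the 1943 paper.

```python
def mp_neuron(inputs, weights, threshold):
    """Fire (return 1) if the weighted sum of binary inputs reaches the threshold."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With suitable weights and thresholds, the all-or-none unit acts as a logic gate:
def AND(a, b):
    return mp_neuron([a, b], [1, 1], threshold=2)   # fires only if both inputs fire

def OR(a, b):
    return mp_neuron([a, b], [1, 1], threshold=1)   # fires if either input fires

def NOT(a):
    return mp_neuron([a], [-1], threshold=0)        # an inhibitory input blocks firing
```

The on/off pulse is exactly the true/false of propositional logic: the same thresholded sum, wired with different weights, computes different logical functions.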
A Model of the Brain, Not a Copy
So artificial intelligence (AI) was inspired by the human brain, but how much is it really like the brain? Yoshua Bengio, a pioneer in deep learning and artificial neural networks, is careful to point out that AI is a model of what's going on in the brain, not a copy.
"A lot of inspiration from the brain went into the design of neural networks as they're used now," says Bengio, professor of computer science at the University of Montreal and scientific director of the MILA-Quebec AI Institute, "but the systems we have built are also very different from the brain in many ways." For one thing, he explains, state-of-the-art AI systems don't use pulses but rather floating point numbers. "People on the engineering side don't care to try to reproduce anything in the brain," he says. "They just want to do something that's going to work."
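The contrast Bengio draws can be made concrete. Below is an illustrative sketch (not any particular system's code): a biological-style unit emits an all-or-none pulse, while a modern artificial unit passes along a continuous floating-point activation.

```python
import math

def spiking_unit(inputs, weights, threshold=1.0):
    """Pulse-style output: the unit either fires (1) or stays silent (0)."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

def float_unit(inputs, weights):
    """Modern-style output: a graded floating-point value in (0, 1) via a sigmoid."""
    total = sum(x * w for x, w in zip(inputs, weights))
    return 1.0 / (1.0 + math.exp(-total))
```

The graded output is what makes today's networks easy to train with calculus-based methods, which is one reason engineers kept the floats and dropped the pulses.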
Read More: The Pros and Cons of Artificial Intelligence
But as Christian noted, what works in artificial neural networks is remarkably similar to what works in biological neural networks. While agreeing that these programs aren't exactly like the brain, Randall O'Reilly says, "Neural network models are a closer fit to what the brain is actually doing than to a purely abstract description at the computational level."
O'Reilly is a neuroscientist and computer scientist at the University of California Davis. "The units in these models are doing something like what actual neurons do in the brain," he says. "It's not just an analogy or a metaphor. There really is something shared at that level."
Similar to Artificial Intelligence
The newer transformer architecture that powers large language models, such as GPT-3 and ChatGPT, is even more similar to the brain in some ways than previous models. These newer systems, says O'Reilly, are mapping how different areas of the brain work, not just what an individual neuron is doing. But it's not a direct mapping; it's what O'Reilly calls a "re-mix" or a "mash-up."
The brain has separate areas, such as the hippocampus and the cortex, each of which specializes in a different form of computation. The transformer, says O'Reilly, blends those two together. "I picture it as a sort of puree of the brain," he says. This puree is spread through every part of the network and does some hippocampus-like things and some cortex-like things.
O'Reilly likens the generic neural networks that preceded the transformers to the posterior cortex, which is involved in perception. When the transformers arrived, they added some functions similar to those of the hippocampus, which, he explains, is good at storing and retrieving detailed facts — for example, what you ate for breakfast or the route you take to get to work. But instead of having a separate hippocampus, the entire AI system is like one massive — pureed — hippocampus.
Whereas a standard computer has to look up information by its address in memory or some sort of tag, the neural net can automatically retrieve information based on prompts (what did you have for breakfast?). This is what O'Reilly calls the "superpower" of neural networks.
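That difference between address lookup and prompt-driven recall can be shown with a toy example. This is a simplified sketch of content-addressable retrieval; the stored "memories" and their key vectors are made up for illustration.

```python
def dot(a, b):
    """Similarity between two vectors (dot product)."""
    return sum(x * y for x, y in zip(a, b))

# Stored memories as (key vector, value) pairs. The vectors are hypothetical.
memories = [
    ([1.0, 0.0, 0.2], "oatmeal for breakfast"),
    ([0.1, 1.0, 0.0], "route to work via Main St"),
]

def recall(cue):
    """Retrieve the value whose key best matches the cue, no exact address needed."""
    return max(memories, key=lambda kv: dot(kv[0], cue))[1]

# A noisy, partial cue still pulls up the right memory:
print(recall([0.9, 0.1, 0.0]))   # -> "oatmeal for breakfast"
```

A conventional memory would fail unless handed the exact address; here, any cue close enough to a stored key retrieves the associated content, which is the "superpower" O'Reilly describes.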
Yet, the Brain is Different
The similarities between human brains and neural nets are striking, but the differences are, perhaps, profound. One way these models differ from the human brain, says O'Reilly, is that they don't have the essential ingredient for consciousness. He and others working in this area posit that in order to have consciousness, neurons must have a back-and-forth conversation.
"The essence of consciousness is really that you have some sense of the state of your brain," he says, and getting that takes bidirectional connectivity. However, all existing models have only one-way conversations among AI neurons. O'Reilly is working on it, though. His research deals with just this kind of bidirectional connectivity.
Not all attempts at machine learning have been based on neural networks, but the most successful ones have. And that probably shouldn't be surprising. Over billions of years, evolution found the best way to create intelligence. Now we're rediscovering and adapting those best practices, says Christian.
"It's no accident, no mere coincidence," he says, "that the most biologically inspired models have turned out to be the best performing."