In the fall of 2003, a San Francisco art gallery exhibited my brain. As a conceptual artist, I work with ideas instead of oil paint, which sometimes leads to accusations that I’m not creating anything others can experience. To express my thoughts as directly as possible, I collaborated with University of California, San Francisco, neurologist Bruce Miller, who imaged my brain while I lay in a medical scanner and contemplated beauty and truth.
The technique Miller used, called functional magnetic resonance imaging (fMRI), was developed in the 1990s to noninvasively measure mental activity. Blood flow inside the brain is tracked with a strong magnetic field and radio pulses that interact with the iron in hemoglobin, the blood protein that transports oxygen; oxygen-rich and oxygen-poor blood respond differently, and that difference becomes the scanner’s signal. Because circulation increases after neurons fire, the flow of blood reveals the flow of thought.
At least that’s the concept. When we put it into practice in 2003, my resulting scans showed scant detail. During my exhibit at Modernism Gallery, spectators squinted at the red blobs overlaying my gray matter, shook their heads in bafflement and replenished their glasses of chardonnay.
Sixteen years later, Miller has a similar reaction. Sitting across from me in his office at UCSF, he runs a finger over an image on his computer — one of the scans of my brain while I was thinking about truth. “It’s very poor resolution, but I think the theme I get here is … you are in deep reflection,” he says. “My sense is that this [scan] is outdated.”
He’s right. Technology has come a long way since 2003, and most fMRI machines in hospitals are now at least four times more powerful than the one we used. And in the last few years in particular, there have been significant advances in teasing out the sorts of things we can learn from fMRI.
My experience all those years ago has often made me think about the extent to which thoughts are observable, and what those observations can tell us about the ways our minds work. Now, I wonder whether these improvements mean fMRI can become a means of introspection, a tool for communication and even the mode of artistic expression I once envisioned. To see if all this is possible, I set out, old brain scans in hand, to speak to the scientists behind some of these advances.
“I can see which brain areas show peak activation,” says neuroscientist Yukiyasu Kamitani as he examines my scans at his Kyoto University laboratory. “But we no longer look at hot spots,” he says, referring to the red blobs representing maximum blood flow. “We look for patterns of brain activity, and these cannot be seen with the naked eye.”
Kamitani is one of the world’s leading experts on the visual cortex, the part of the brain that processes what the eyes see, and fMRI is one of his most penetrating instruments. He’s developed a method to decode brain scans that he compares to mind-reading. His technique, published in Nature Communications in 2017, allows him to reconstruct pictures shown to people inside an fMRI machine by documenting their brain’s reaction to the picture. Even more remarkably, he can reconstruct images that people imagine.
To accomplish this, Kamitani first uses a deep neural network — a form of artificial intelligence — to analyze brain activity while volunteers look at pictures of familiar objects such as umbrellas and passenger jets. The changes in blood flow occur across the visual cortex, making patterns that the AI learns to distinguish and to associate with categories such as airplane and qualities such as a jet’s silver cladding. After the deep neural network has been trained by comparing thousands of brain plots with the photos that triggered them, Kamitani serves up scans without the photographic prompts, letting the AI generate its own version of what the prompt might have been.
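The decoding logic Kamitani describes can be sketched in miniature. Below is a toy illustration, not his actual pipeline: invented “voxel” patterns are simulated as noisy linear images of category feature vectors, a ridge-regression decoder learns to map scans back to features, and a new scan is identified by its nearest feature match. Every name and number here (the two categories, the voxel and feature counts, the noise level) is a made-up assumption for demonstration.

```python
# Toy sketch of pattern decoding (hypothetical data, not Kamitani's model):
# learn a linear map from simulated voxel patterns to image-feature
# vectors, then identify a new scan by its nearest feature match.
import numpy as np

rng = np.random.default_rng(0)
n_voxels, n_features = 50, 10

# Made-up feature vectors for two categories the scanner subject viewed.
categories = {"umbrella": rng.normal(size=n_features),
              "airplane": rng.normal(size=n_features)}

# Simulate a scan as a noisy linear image of its category's features
# (standing in for a real fMRI response pattern).
W_true = rng.normal(size=(n_voxels, n_features))
def simulate_scan(features):
    return W_true @ features + 0.1 * rng.normal(size=n_voxels)

# Training set: many scans paired with the features that evoked them.
X, Y = [], []
for _ in range(100):
    for feat in categories.values():
        X.append(simulate_scan(feat))
        Y.append(feat)
X, Y = np.array(X), np.array(Y)

# Ridge regression from voxels to features -- the "decoder."
lam = 1.0
B = np.linalg.solve(X.T @ X + lam * np.eye(n_voxels), X.T @ Y)

def decode(scan):
    feat = scan @ B
    # Pick the category whose features best correlate with the decoding.
    return max(categories, key=lambda c: np.corrcoef(feat, categories[c])[0, 1])

print(decode(simulate_scan(categories["airplane"])))  # → airplane
```

The real systems replace this linear map with a deep neural network and decode thousands of learned image features rather than two labels, but the train-then-decode structure is the same.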
Many of these fabricated pictures are readily identifiable. The objects that people imagine, which are decoded in the same way, are vaguer but still intelligible. As a result of this study and subsequent research, Kamitani has also found that we process images layer by layer. When you look at something, you first take in the basic qualities, such as color. But when you imagine that same object from memory, the layers are processed differently. Imagination begins with generic categories, like table, and then is embellished with remembered details, like what material the table is made of. More than just showing objects of thought, Kamitani is observing how visualization works.
Once More, With Feeling
As spectacular as it might be to show other people what I’m imagining when I think about beauty and truth, the images would be incomplete if viewers couldn’t see how I felt about those things, too. Duke University neuroscientist Kevin LaBar tells me that’s also possible.
LaBar has created the first successful model for predicting people’s emotional state based on their fMRI scans, publishing his research in PLOS Biology in 2016. Like Kamitani, he doesn’t look for hot spots. Instead, he uses AI to correlate patterns of brain activity with subjective feelings.
LaBar trained his AI using music and movies. “The clips we chose have been previously shown to induce different emotions,” he explains. Playing sequences to people inside the machine, he labeled the brain scans with the emotions that the clips were known to elicit. On that basis, he trained his AI to identify blood flow patterns associated with seven basic feelings, ranging from surprise to amusement. To validate the computer model, he showed the AI a second batch of scans without identifying the emotions, and compared what the computer detected with the emotional tags on the associated clips.
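The train-then-validate cycle described above can be sketched with a toy classifier. This is invented data, not LaBar’s model: each emotion gets a fabricated activation “prototype,” training scans are labeled with the emotion their clip is known to elicit, and a simple nearest-centroid classifier is then checked against a second batch of scans it never saw. The emotion list, voxel count, and noise level are all assumptions for illustration.

```python
# Toy sketch of LaBar-style validation (hypothetical data, not his model):
# label simulated scans with the emotion each clip elicits, fit a
# nearest-centroid classifier, then test it on held-out scans.
import numpy as np

rng = np.random.default_rng(1)
emotions = ["surprise", "amusement", "fear", "sadness",
            "anger", "contentment", "neutral"]
n_voxels = 80

# Each emotion gets a characteristic (made-up) activation pattern.
prototypes = {e: rng.normal(size=n_voxels) for e in emotions}
def scan_for(emotion):
    return prototypes[emotion] + 0.5 * rng.normal(size=n_voxels)

# Training batch: scans tagged with the clips' known emotions.
train = [(scan_for(e), e) for e in emotions for _ in range(20)]
centroids = {e: np.mean([s for s, lab in train if lab == e], axis=0)
             for e in emotions}

def predict(scan):
    # Assign the emotion whose average pattern is closest.
    return min(centroids, key=lambda e: np.linalg.norm(scan - centroids[e]))

# Validation batch: a second set of scans, labels withheld until scoring.
test = [(scan_for(e), e) for e in emotions for _ in range(10)]
accuracy = np.mean([predict(s) == lab for s, lab in test])
print(f"held-out accuracy: {accuracy:.2f}")
```

The point of the held-out batch is the same as in LaBar’s study: a model that merely memorized its training scans would fail on scans it had never encountered.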
The patterns were consistent enough across subjects that the AI can now predict the emotional state of people it’s never encountered — and not just their reactions to cheesy movies and music. For instance, the machine detects the fear people feel when they first enter the tight fMRI chamber. It even picks up on mood disorders such as depression.
From LaBar’s perspective, fMRI can be used as a communications device, and not only for conceptual artists. Scans could diagnose the anxieties of people who have trouble expressing themselves emotionally, or monitor people objectively as they go through anger management counseling.
Nevertheless, LaBar is quick to point out that the AI’s picture remains fragmentary. There may be additional untested emotional states — more pieces to the puzzle. By dividing the problem into such pieces, scientists are creating a mosaic without knowing how their models will all fit together. Looking at my brain scans, and hearing about my attempt to express the abstract ideas of beauty and truth, LaBar has a suggestion: “I would apply Jack Gallant’s new semantic network analysis.”
Where Words Live
Gallant, a University of California, Berkeley, neuroscientist, has been building a kind of atlas of the brain for nearly a decade. One component of his atlas, first published in Nature in 2016, maps where the meanings of words are stored in the cerebral cortex, the outermost layer of the brain where many higher functions take place. The words he’s mapped run the gamut from body parts to numbers to principles such as truth. “Our method produces the most accurate and detailed map you can produce with fMRI,” he tells me. It reveals that every concept is stored in multiple places across both hemispheres of the brain. For example, the concept dog elicits brain activity in prefrontal, parietal and temporal cortices.
Gallant’s research began with recordings from The Moth Radio Hour, a popular show featuring personal stories recounted in front of a live audience. He had people rest inside an MRI machine, scanning their brains while they listened to these narratives. By doing so, he and his team created a vast data set indexing the areas where neurons fired in response to nearly 1,000 common English words. Then, he validated his model by having the computer predict patterns of brain activation as his subjects heard additional stories.
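The paragraph above describes what neuroscientists call an encoding model: predict each voxel’s response from features of the words being heard, then validate by predicting responses to new stories. Below is a toy version with entirely invented data and dimensions, in the spirit of that approach rather than Gallant’s published method: fit a linear map from word features to simulated voxel responses, then score how well it predicts responses to held-out words.

```python
# Toy encoding-model sketch (invented data, not Gallant's method):
# predict each voxel's response from word features, then validate by
# correlating predicted and actual responses on held-out words.
import numpy as np

rng = np.random.default_rng(2)
n_words, n_features, n_voxels = 300, 20, 60

word_features = rng.normal(size=(n_words, n_features))   # semantic features
W_true = rng.normal(size=(n_features, n_voxels))         # the brain's "tuning"
responses = word_features @ W_true + 0.3 * rng.normal(size=(n_words, n_voxels))

# Fit on the first 250 words -- the "training stories."
Xtr, Ytr = word_features[:250], responses[:250]
W_hat = np.linalg.lstsq(Xtr, Ytr, rcond=None)[0]

# Predict activation for the remaining words -- the "additional stories."
pred = word_features[250:] @ W_hat
actual = responses[250:]
r = np.mean([np.corrcoef(pred[:, v], actual[:, v])[0, 1]
             for v in range(n_voxels)])
print(f"mean prediction correlation: {r:.2f}")
```

A high correlation on words the model never saw is what licenses the claim that the fitted weights reflect where meanings live, rather than noise memorized from the training stories.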
He found that every word is associated with multiple regions, which he hypothesizes is because each area processes an aspect of its meaning. To demonstrate, he inputs the words truth and beauty into his atlas. “The beauty regions represent visual and sensory concepts,” he says, “while the truth regions represent social concepts.”
Then he shows me the brains of his individual subjects, where regions associated with truth are colored red and those associated with beauty are shown in blue. If the similarities are what make his project meaningful scientifically, the differences are rich with potential for introspection and artistic expression. With another trip to the scanner, perhaps I’ll be able to show what truth and beauty mean to me, how I feel about them, even what images they evoke. It’s taken 16 years, but I no longer need to fret about learning how to paint.
Jonathon Keats is a conceptual artist and a contributing editor to Discover. This story originally appeared in print as "The Shape of Thought."