An image being evolved over hundreds of trials as a monkey neuron selects what it likes best. (Credit: Ponce, Xiao, and Schade et al./Cell) Impressionist art, or perhaps nightmare fuel — these images are a confusing mess to the human eye. But to a macaque's brain cells, says a group of researchers, the images are fascinating. The pictures are the result of an experiment that paired artificial intelligence with primate intelligence. The goal was to create images specifically tuned to stimulate neurons in a monkey's visual cortex. It's not an attempt to create monkey-centric art. Instead, the jumbled images might help make sense of the way our brains see the world around us. And the researchers say these renderings are even more potent than natural images at provoking monkeys' brains to respond.
How Neurons See
Scientists don't fully understand the process that turns incoming photons into coherent images in our minds. What we do know is that our brains have multiple layers of neurons for visual processing, each with its own task. As the neural signal for a particular image passes through these layers, it is gradually sculpted into a coherent representation. Exactly how this happens, though, is still a bit of a mystery. So, to narrow down the problem, researchers from Harvard University and the Washington University School of Medicine focused in as far as they could and looked at individual neurons. They were working in a piece of the visual processing system called the inferior temporal cortex (IT cortex). The IT cortex comes into play toward the end of the visual processing assembly line, and its main job seems to be recognizing objects. This function has been known for quite some time, actually, thanks to patients who've sustained damage to their IT cortex. "If you lose that part of your brain ... you can see, but you can't recognize things. You have what's called an agnosia — a very selective loss of being able to recognize particular classes of objects," said Margaret Livingstone, a Harvard neurobiologist and co-author of the paper. By watching how individual neurons in the IT cortex responded to an image, the researchers could get a feel for what that specific neuron was attuned to. Previous experiments have shown that cells, or clusters of cells, in the IT cortex respond strongly to things like faces or hands. But those findings rest on educated guesses, as researchers are limited in the number of things they can "show" to a neuron to see if it will react. Perhaps some neurons are attuned to hands, but would respond even more strongly to an octopus — or even an image that doesn't occur in the real world.
An image evolved by a macaque neuron with the help of artificial intelligence. (Credit: Ponce, Xiao, and Schade et al./Cell)
To get around the problem, the researchers turned to artificial intelligence. They showed macaques with electrodes implanted in their IT cortex a set of 40 randomly generated images showing abstract patterns, and watched to see which stimulated their neurons the most. The 10 that did the job best were kept and used to generate a new set of images. This went on for a few hundred rounds, or generations, depending on how long the macaques could be convinced to pay attention to the screen. By the end of the experiment, published in Cell, the researchers had a set of images that had been evolved through many generations to depict what specific neurons or neuron groups like best. The shapes depicted are messy, though recognizable traits emerge: what looks like a monkey face in one, the masked and gowned figure of a lab member in another. The features look warped, though, like a caricature of a monkey or a human being. Livingstone says this reveals an important insight into how neurons code for, or recognize, objects. "You might assume that a cell that’s going to care about faces, the optimum stimulus would look like a face," she says. "Instead, it looks like a gnome, or a gargoyle, or a leprechaun. So that tells us that neurons are coding extremes, not typical things." It hints that when our brains are picking out objects like faces, they do so not by storing an image of every single permutation of a human face, but by encoding the opposite ends of the face spectrum. Determining where a particular face falls between two extremes is an easier way for our brains to identify things. "Your brain is probably full of neurons that are coding for things you never actually see, but they're coding for how things differ from everything else," Livingstone says. The result is that some neurons will respond most readily to things that we'd never see in the real world.
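The closed-loop procedure described above is essentially a genetic algorithm: show a population of images, score each by a neuron's response, keep the top scorers, and breed the next generation from them. Here is a minimal Python sketch of that loop, with two loudly labeled simplifications: a flat random vector stands in for the generative network's latent image code, and a toy surrogate function stands in for the firing rate of a real IT-cortex neuron.

```python
import random

random.seed(0)

POP_SIZE = 40      # images shown per generation (from the article)
KEEP = 10          # top scorers kept as parents (from the article)
GENERATIONS = 200  # "a few hundred rounds"
CODE_LEN = 64      # length of each latent code (illustrative assumption)

def random_code():
    """A random latent code; in the experiment this seeds a generative network."""
    return [random.gauss(0.0, 1.0) for _ in range(CODE_LEN)]

def mutate(code, rate=0.1, scale=0.5):
    """Randomly perturb some genes of a latent code."""
    return [g + random.gauss(0.0, scale) if random.random() < rate else g
            for g in code]

def crossover(a, b):
    """Mix two parent codes gene by gene."""
    return [random.choice(pair) for pair in zip(a, b)]

# Stand-in fitness: in the real experiment, this was the recorded firing
# rate of a neuron viewing the image decoded from the code. This toy
# surrogate simply rewards codes near a hidden target vector.
TARGET = random_code()
def neuron_response(code):
    return -sum((g - t) ** 2 for g, t in zip(code, TARGET))

population = [random_code() for _ in range(POP_SIZE)]
initial_best = max(neuron_response(c) for c in population)

for gen in range(GENERATIONS):
    scored = sorted(population, key=neuron_response, reverse=True)
    parents = scored[:KEEP]                      # keep the 10 best
    children = [mutate(crossover(random.choice(parents),
                                 random.choice(parents)))
                for _ in range(POP_SIZE - KEEP)]  # breed 30 replacements
    population = parents + children

best = max(population, key=neuron_response)
best_score = neuron_response(best)
```

Because the parents are carried over unchanged each round (elitism), the best score can never decrease from generation to generation; over a few hundred rounds the population drifts steadily toward whatever the scoring neuron prefers, with no need to know in advance what that is.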
The images from the experiment bear that out — they are not an accurate depiction of our world, but a skewed representation of it. What was surprising to Livingstone, though, was how complex an object individual neurons could recognize. A face is a complex object, composed of multiple features: eyes, nose, mouth, etc. One theory of visual processing held that individual neurons might encode simple features and work together to construct the image of a face. But single neurons guided the algorithm to construct an entire face, an indication that our brain cells can recognize complex objects individually. That doesn’t mean, of course, that there’s a single neuron in our brain for each face we know, or that hand recognition comes down to a single brain cell. Our neurons are linked together into networks that cooperate to make sense of the visual world around us. But Livingstone’s work reinforces that single neurons can conduct complex, specific tasks, and that we can find out what those tasks are. To us, the art dreamed up by these individual neurons might not make much sense, but that’s only to be expected — they are but one voice in a much larger chorus.