When you see a great big smile, you know that someone is happy. Pretty simple, right? Such an inference is less a product of deductive reasoning and more like an instinctual reaction — we just know what certain facial expressions mean, we don't have to think about it. And researchers from Ohio State University say they've pinpointed the region of the brain that goes to work whenever we are confronted with raised eyebrows, wrinkled noses, taut lips and other facial contortions. Located toward the back of the brain's right hemisphere, the small area is called the posterior superior temporal sulcus (pSTS), and researchers say it helps us process facial expressions.
What's in a Smile?
The researchers based their study on fMRI scans of 10 participants who were shown over 1,000 different faces expressing one of seven emotions. The researchers watched their brains to see which parts received more blood flow while looking at faces — the more blood flowing to a part of the brain, the more activity. And when every participant looked at faces during the test, more blood flowed to their pSTS.

Previous studies had established a link between this region and recognizing facial expressions, but the researchers took it a step further here, demonstrating that the pSTS is able to detect and differentiate individual movements of the face. In addition, they were able to show that this region acts the same for everyone, meaning that, for this part of the brain at least, we process happy smiles and disgusted frowns the same way as everyone else. Processing emotions is of course a complex activity, involving whole swathes of the brain. But, for the first time, scientists have pinpointed a key step in that process by isolating just which region tells us that a smile is a smile and a frown is a frown. They published their work Tuesday in the Journal of Neuroscience.

As part of their experiment, the researchers broke expressions down into discrete movements of facial muscles, called action units. One action unit, for example, was raising the middle of the eyebrow, and the overlapping combinations of these facial movements give us a mosaic of facial expressions. By comparing how participants' brains looked while observing action units, the researchers not only isolated the pSTS as the locus of facial expression recognition, they also discerned patterns in the region that corresponded to each facial movement.
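The idea of expressions as overlapping combinations of action units can be sketched in a few lines of code. The AU numbering below follows the Facial Action Coding System convention (e.g., AU12 is the lip corner puller), but the expression-to-AU mappings are simplified examples for illustration, not the study's actual stimuli.

```python
# Illustrative sketch: facial expressions as overlapping sets of action units (AUs).
# AU numbers follow the Facial Action Coding System convention, but these
# expression-to-AU mappings are simplified examples, not the study's stimuli.

EXPRESSIONS = {
    "happy":     {6, 12},        # cheek raiser + lip corner puller
    "sad":       {1, 4, 15},     # inner brow raiser + brow lowerer + lip corner depressor
    "surprised": {1, 2, 5, 26},  # brow raisers + upper lid raiser + jaw drop
    "disgusted": {9, 15, 16},    # nose wrinkler + lip corner + lower lip depressors
}

def shared_action_units(a: str, b: str) -> set:
    """Return the action units two expressions have in common."""
    return EXPRESSIONS[a] & EXPRESSIONS[b]

print(shared_action_units("sad", "surprised"))  # AU1 appears in both
print(shared_action_units("sad", "disgusted"))  # AU15 appears in both
```

Because different expressions share some movements (sadness and surprise both raise the inner brow, for instance), telling them apart requires registering each action unit individually — which is what the study suggests the pSTS does.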
One of the images shown to study participants. Several "Action Units," or facial movements, are shown. (Credit: The Ohio State University)
Modeling the Brain
With this information in hand, the researchers trained a computer algorithm to recognize different patterns of brain activity and predict the facial expression a person was seeing, based only on how their brain responded. The algorithm achieved a 60 percent success rate — a meaningful result, considering that randomly guessing among the seven emotions would be accurate only about 15 percent of the time. They also compared their results from the pSTS with results from an analysis of the whole brain, and found that expanding their search did not reliably improve their success rate, indicating that the pSTS alone is largely responsible for representing these facial action units.

Furthermore, their computer model worked with data from each of the ten participants, showing that our brains process facial expressions in the same way. A smile to my brain is a smile to your brain, in other words. Still, what that smile actually means for an individual varies significantly: the researchers found that their model couldn't predict how people interpreted a smile, only that they were seeing a smile.

Their work could be important for people with conditions like autism, who have difficulty processing emotions, and who have been found to have reduced activity in the pSTS. A greater understanding of how the brain translates physical signals into abstract emotions could help researchers understand and treat such disorders.