
Decoding Faces from the Brain

By Neuroskeptic
Jun 10, 2016


In a fascinating new paper, researchers Hongmi Lee and Brice A. Kuhl report that they can decode faces from neural activity. Armed with a brain scanner, they can reconstruct which face a participant has in mind. It's a cool technique that really seems to fit the description of 'mind reading' - although the method's accuracy is only modest. Here's how they did it.

Lee and Kuhl started out with a set of over 1,000 color photos of different faces. During an fMRI scan, these images were shown to participants one after the other and the neural responses were recorded. The set of faces was then decomposed into 300 eigenfaces using the technique of principal component analysis (PCA). Each eigenface represents some statistical aspect of the data. The neural activity associated with each eigenface was then determined in a machine learning step (see A, below).
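To make that training step concrete, here is a minimal Python sketch of the eigenface-plus-decoder idea. The image size, voxel count, and the choice of ridge regression as the machine-learning step are assumptions for illustration, not the authors' exact pipeline.

```python
# Minimal sketch of the eigenface + decoder training step (A). Shapes and
# the ridge-regression decoder are illustrative assumptions.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

n_train_faces = 1000      # roughly 1,000 training face photos
n_pixels = 64 * 64 * 3    # assumed image resolution, flattened RGB
n_voxels = 5000           # assumed number of voxels in the brain region
n_components = 300        # 300 eigenfaces, as in the paper

# Placeholder data: one row per face image / per fMRI response pattern
train_faces = np.random.rand(n_train_faces, n_pixels)
train_brain = np.random.rand(n_train_faces, n_voxels)

# Step 1: decompose the training faces into eigenfaces with PCA
pca = PCA(n_components=n_components)
train_scores = pca.fit_transform(train_faces)   # eigenface weights per face

# Step 2: learn the mapping from neural activity to eigenface weights
decoder = Ridge(alpha=1.0)
decoder.fit(train_brain, train_scores)
```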

Now for the mind-reading bit: Lee and Kuhl presented participants with test faces that they'd never seen before (i.e. novel faces not in the training dataset). The neural responses to the new faces were analysed to work out which eigenfaces were activated in the participants' brains (B, above). By summing these predicted eigenfaces, a 'reconstructed' face could be created. These reconstructions were significantly more similar to the actual test faces than would be expected by chance. Here we can see some original test faces along with the reconstructions based on activity in the occipitotemporal cortex (OTC) of the brain during face perception:
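Continuing the sketch above (and reusing its pca and decoder objects), this is roughly how predicted eigenface weights could be turned back into images and scored against chance. The pixel-correlation identification test is an illustrative stand-in for the paper's actual similarity analysis.

```python
# Continuing the previous sketch: reconstruct novel test faces from brain
# activity and check similarity against chance. All data are placeholders.
n_test_faces = 50
test_faces = np.random.rand(n_test_faces, n_pixels)   # actual test images
test_brain = np.random.rand(n_test_faces, n_voxels)   # responses to them

# Step 3: predict eigenface weights from each neural response
pred_scores = decoder.predict(test_brain)

# Step 4: sum the weighted eigenfaces (i.e. invert the PCA) to get images
reconstructions = pca.inverse_transform(pred_scores)

def pixel_corr(a, b):
    """Pearson correlation between two flattened images."""
    return np.corrcoef(a, b)[0, 1]

# Two-alternative identification: is each reconstruction more similar to its
# own test face than to another test face? Chance would be 50%.
hits, comparisons = 0, 0
for i in range(n_test_faces):
    own = pixel_corr(reconstructions[i], test_faces[i])
    for j in range(n_test_faces):
        if i == j:
            continue
        comparisons += 1
        if own > pixel_corr(reconstructions[i], test_faces[j]):
            hits += 1
print(f"Identification accuracy: {hits / comparisons:.2f} (chance = 0.50)")
```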

It gets better. As well as reconstructing faces from neural activity during perception of the faces (which has been done before), Lee and Kuhl examined neural activity when faces weren't actually on the screen at all. Participants were given a face recall task, in which they had to hold a face in memory. It turns out that these remembered faces could be reconstructed too, based on the neural activity in a memory-related brain region, the angular gyrus (ANG). The reconstructions produced by this technique aren't perfect, but they do seem to capture some of the major features of the original faces. These scatter-plots show the correlation between properties of the original and reconstructed images, as rated by a panel of observers:
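As a rough illustration of that rating analysis, the sketch below correlates hypothetical observer ratings of the original and reconstructed faces on a single dimension; the data, scale, and dimension are made up.

```python
# Rough illustration of the rating analysis: a panel rates originals and
# reconstructions on some dimension (e.g. trustworthiness), and the two
# sets of ratings are correlated. All data here are made up.
import numpy as np
from scipy.stats import pearsonr

n_faces = 50
rng = np.random.default_rng(0)
ratings_original = rng.uniform(1, 7, n_faces)                    # 1-7 scale
ratings_reconstructed = ratings_original + rng.normal(0, 1.5, n_faces)

r, p = pearsonr(ratings_original, ratings_reconstructed)
print(f"Original vs reconstructed ratings: r = {r:.2f}, p = {p:.3g}")
```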

The reconstructions from the perceptual region (OTC) were generally better than those from the memory region (ANG), but in both regions the reconstructed faces showed similarity to the originals, not only in the rather basic dimension of skin color, but also in terms of the perceived dominance and trustworthiness of the faces.

Lee H, & Kuhl BA (2016). Reconstructing Perceived and Retrieved Faces from Activity Patterns in Lateral Parietal Cortex. The Journal of Neuroscience, 36(22), 6069-6082. PMID: 27251627
