
Decoding Faces from the Brain

By Neuroskeptic | June 10, 2016 4:31 PM


In a fascinating new paper, researchers Hongmi Lee and Brice A. Kuhl report that they can decode faces from neural activity. Armed with a brain scanner, they can reconstruct which face a participant has in mind. It's a cool technique that really seems to fit the description of 'mind reading', although the method's accuracy is only modest.

Here's how they did it. Lee and Kuhl started out with a set of over 1,000 color photos of different faces. During an fMRI scan, these images were shown to participants one after the other and the neural responses were recorded. The set of faces was then decomposed into 300 eigenfaces using the technique of principal component analysis (PCA). Each eigenface represents some statistical dimension of variation across the face set. The neural activity associated with each eigenface was then determined in a machine learning step (see A, below).

[Figure: lee-kuhl.png, schematic of the training step (A) and the decoding/reconstruction step (B)]
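To make the logic concrete, here's a minimal sketch of the training step in Python. The arrays `train_faces` (flattened photos) and `train_activity` (the fMRI pattern evoked by each photo) are hypothetical stand-ins, and the whole thing just illustrates the eigenface-plus-learned-mapping idea with scikit-learn; Lee and Kuhl's actual pipeline differs in detail:

```python
# A minimal sketch of the training step (A). `train_faces` and
# `train_activity` are simulated stand-ins, not real data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
train_faces = rng.random((1000, 64 * 64))   # ~1,000 face photos, flattened
train_activity = rng.random((1000, 5000))   # voxel pattern evoked by each face

# Decompose the face set into 300 eigenfaces via PCA.
pca = PCA(n_components=300)
train_scores = pca.fit_transform(train_faces)  # eigenface weights per face

# Learn the mapping from neural activity to eigenface weights.
decoder = Ridge(alpha=1.0)
decoder.fit(train_activity, train_scores)
```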

Now for the mind-reading bit: Lee and Kuhl presented participants with test faces that they'd never seen before (i.e. novel faces not in the training dataset). The neural responses to the new faces were analysed to work out which eigenfaces were activated in the participants' brains (B, above). By summing these predicted eigenfaces, a 'reconstructed' face could be created. The reconstructions were significantly more similar to the actual test faces than would be expected by chance. Here we can see some original test faces along with the reconstructions based on activity in the occipitotemporal cortex (OTC) of the brain during face perception:

[Figure: reconstructions-1.png, original test faces alongside their OTC-based reconstructions]
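And here's the corresponding decoding step, continuing the sketch above (so `pca`, `decoder` and `rng` carry over). The arrays `test_activity` and `test_faces` are again hypothetical:

```python
# The decoding step (B): responses to novel faces that were not in the
# training set, plus the corresponding photos for comparison.
test_activity = rng.random((10, 5000))
test_faces = rng.random((10, 64 * 64))

# Predict each test face's eigenface weights from brain activity alone,
# then sum the weighted eigenfaces (PCA's inverse transform) to produce
# a reconstructed image.
predicted_scores = decoder.predict(test_activity)
reconstructions = pca.inverse_transform(predicted_scores)

# One simple check against chance: a reconstruction should correlate more
# with its own test face than with the other test faces on average.
for i in range(len(test_faces)):
    own = np.corrcoef(reconstructions[i], test_faces[i])[0, 1]
    others = np.mean([np.corrcoef(reconstructions[i], test_faces[j])[0, 1]
                      for j in range(len(test_faces)) if j != i])
    print(f"face {i}: own r = {own:.3f}, others r = {others:.3f}")
```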

It gets better. As well as reconstructing faces from neural activity during perception of the faces (which has been done before), Lee and Kuhl examined neural activity when faces weren't actually on the screen at all. Participants were given a face recall task, in which they had to hold a face in memory. It turns out that these remembered faces could be reconstructed too, based on the neural activity in a memory-related brain region, the angular gyrus (ANG). The reconstructions produced by this technique aren't perfect, but they do seem to capture some of the major features of the original faces. These scatter-plots show the correlation between properties of the original and reconstructed images, as rated by a panel of observers:

[Figure: face_features.png, scatter-plots correlating rated features of the original and reconstructed faces]
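For illustration, a toy version of that rating analysis might look like the following, with all ratings simulated rather than taken from the paper:

```python
# Correlate a feature rating (say, trustworthiness) given to each original
# face with the rating given to its reconstruction. All values simulated.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
orig_ratings = rng.random(50)                          # ratings of originals
recon_ratings = orig_ratings + rng.normal(0, 0.3, 50)  # noisy stand-ins

r, p = pearsonr(orig_ratings, recon_ratings)
print(f"feature correlation: r = {r:.2f}, p = {p:.4f}")
```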

The reconstructions from the perceptual OTC were generally better than those from the memory-related ANG, but in both regions, the reconstructed faces resembled the originals not only in the rather basic dimension of skin color, but also in terms of the perceived dominance and trustworthiness of the faces.


Lee H, & Kuhl BA (2016). Reconstructing Perceived and Retrieved Faces from Activity Patterns in Lateral Parietal Cortex. The Journal of Neuroscience, 36(22), 6069-6082. PMID: 27251627
