A neat paper from Schoenmakers et al. of the Dutch Donders Institute, 'Linear reconstruction of perceived images from human brain activity', introduces a new mathematical approach for decoding (or 'brain reading') the image that someone is looking at, pixel by pixel, based on the pattern of neural activity in their visual cortex. The results were not bad:
On the top row, you're looking at the actual letters shown to a volunteer during fMRI scanning. Beneath that are the estimated 'reconstructed' images, based purely on the corresponding brain activity. Here's where it gets crowd-pleasing and meta: in response to a certain six letters, the decoder's estimated outputs spell out a word:
So you could say that we have a case of Brain Reading Reads "Brains" From A Reading Brain. Note, however, that in this case all of the stimuli were single letters from the set B, R, A, I, N, S, albeit written in a variety of fonts. So although the decoder was attempting to reconstruct a raw image - not just pick one from a range of options, as in many studies of this kind - it is perhaps no surprise that it always produced an output with 'lettery' features. The method (a linear Gaussian algorithm) seems novel, however, in that it's based on estimating the stimulus-response properties of each point (voxel) in the visual cortex. I get the feeling that it's less of a 'black box' than other methods, which search for whatever arrays of voxels happen to be associated with different stimuli.
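To make the idea concrete, here's a minimal sketch of what a linear Gaussian decoder of this general kind might look like: fit a linear receptive-field model per voxel, then invert the model with a Gaussian prior over images to get the most probable stimulus. Everything specific here (the dimensions, the ridge penalty, the isotropic prior, the synthetic data) is my own illustrative assumption, not the paper's actual implementation.

```python
import numpy as np

# Encoding model per voxel i: y_i = b_i . x + noise, where x is the
# vectorised stimulus image. Stacked over voxels: y = B x + e,
# with e ~ N(0, sigma2 * I). Given a Gaussian image prior x ~ N(0, tau2 * I),
# the posterior mean (the reconstruction) is:
#   x_hat = (B'B / sigma2 + I / tau2)^(-1) B'y / sigma2

rng = np.random.default_rng(0)
n_pixels, n_voxels, n_train = 28 * 28, 300, 500
sigma2, tau2, lam = 0.5, 1.0, 1.0

# Hypothetical training stimuli and a "true" linear encoding matrix,
# standing in for real fMRI training data.
X_train = rng.normal(size=(n_train, n_pixels))
B_true = rng.normal(scale=0.1, size=(n_voxels, n_pixels))
Y_train = X_train @ B_true.T + rng.normal(scale=np.sqrt(sigma2),
                                          size=(n_train, n_voxels))

# 1) Estimate each voxel's stimulus-response weights by ridge regression
#    (one linear model per voxel, fitted jointly here for brevity).
B_hat = np.linalg.solve(X_train.T @ X_train + lam * np.eye(n_pixels),
                        X_train.T @ Y_train).T

# 2) Invert the encoding model for a new response pattern y_new.
x_true = rng.normal(size=n_pixels)  # stand-in for a presented letter image
y_new = B_hat @ x_true + rng.normal(scale=np.sqrt(sigma2), size=n_voxels)

A = B_hat.T @ B_hat / sigma2 + np.eye(n_pixels) / tau2
x_hat = np.linalg.solve(A, B_hat.T @ y_new / sigma2)  # posterior mean

print("correlation with true image:", np.corrcoef(x_true, x_hat)[0, 1])
```

One appealing property of this setup is that the fitted model is interpretable: each row of B_hat is a candidate receptive field for one voxel, rather than an opaque multivariate classifier.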
Schoenmakers S, Barth M, Heskes T, & van Gerven MA (2013). Linear reconstruction of perceived images from human brain activity. NeuroImage. PMID: 23886984