A paper just out makes the dramatic claim that you can control a robot using thought alone, Avatar style, thanks to a 'mind reading' MRI scanner. But does it really work?
Dutch neuroscientists Patrik Andersson and colleagues bought a robot - an off-the-shelf toy called the 'Spykee' - which is equipped with Wifi and a video camera. The controlling human lay in the scanner and real-time fMRI was used to record brain activity. The video feed from the robot was shown on a screen in the scanner, completing the human-robot loop.
Participants controlled the robot with their brains. Specifically, they had to focus their attention on one of three arrows - forward, left, or right - shown on the screen.
During an initial training phase they focussed on each arrow in turn, to provide examples of the resulting brain activity: these were then fed into a machine learning algorithm that learned to recognize the pattern of BOLD activation for each command. Then in the second phase, they could control the robot just by thinking about the correct arrow - the scanner 'decoded' their brain activity and sent the appropriate commands to the bot over Wifi.
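The two-phase scheme - learn labelled BOLD patterns during training, then classify new scan volumes into robot commands - can be sketched roughly like this. This is an illustrative toy, not the authors' pipeline: the voxel count, the synthetic data, and the nearest-template classifier (standing in for whatever machine learning algorithm the paper actually used) are all assumptions.

```python
# Toy sketch of a train-then-decode brain-computer interface loop.
# Everything here (voxel count, noise levels, classifier) is illustrative.
import numpy as np

rng = np.random.default_rng(0)
COMMANDS = ["forward", "left", "right"]
N_VOXELS = 50  # hypothetical number of voxels in the decoded brain region

def train_templates(examples):
    """Training phase: average the labelled BOLD patterns recorded while
    the subject attends to each arrow, giving one template per command.
    examples: dict command -> (n_trials, n_voxels) array."""
    return {cmd: patterns.mean(axis=0) for cmd, patterns in examples.items()}

def decode(pattern, templates):
    """Decoding phase: classify a new scan volume as the command whose
    template it most resembles (a simple nearest-template stand-in for
    a real machine-learning classifier)."""
    distances = {cmd: np.linalg.norm(pattern - tmpl)
                 for cmd, tmpl in templates.items()}
    return min(distances, key=distances.get)

# Synthetic demo: each command evokes a distinct mean pattern plus noise.
true_means = {cmd: rng.normal(0.0, 1.0, N_VOXELS) for cmd in COMMANDS}
training_data = {cmd: true_means[cmd] + rng.normal(0.0, 0.3, (20, N_VOXELS))
                 for cmd in COMMANDS}
templates = train_templates(training_data)

# A new scan volume arrives while the subject attends to 'left';
# the decoded label is what would be sent to the robot over Wifi.
new_pattern = true_means["left"] + rng.normal(0.0, 0.3, N_VOXELS)
command = decode(new_pattern, templates)
print(command)  # with this noise level, recovers "left"
```

In the real system the decoded label would be translated into a drive command and sent to the robot, closing the loop via the video feed.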
None of the elements of this process is new - real-time fMRI has been around for a few years, as has machine learning to decode brain activation - but it's the first time they've been put together in this way.
And it's pretty awesome. The participants were able to guide their 'avatar' around a room to visit a number of target locations. They weren't perfectly accurate, and it took 10 or 15 minutes to navigate a few meters of ground... but it worked.
However... were they really using their minds, or just their eyes?
This is my main concern about this paper: participants were told to keep their eyes focussed on the middle of the screen and just mentally focus on the arrows to give commands. If they did keep their eyes entirely stationary, then the patterns of brain activation would indeed represent pure 'thoughts'.
But if they were moving their eyes slightly (even unconsciously), the interpretation would be rather different. Moving their eyes would change the pattern of light hitting their retinas, and this would be expected to change activation in the brain's visual system.
So, maybe the fancy fMRI decoding system wasn't reading their mind, it was just acting as an elaborate means of tracking eye movements - which would be much less interesting. If you want to control a robot with your eyes, there are cheaper ways.
Andersson et al acknowledge this issue, and they argue, for various reasons, that this probably wasn't what happened here - but they didn't measure eye movements directly, so it does remain a worry. Eye-tracking devices suitable for fMRI are widely available, but this study used an ultra-powerful 7 Tesla scanner which, the authors say, made eye tracking impossible. So there's more work to be done here.
Andersson P, Pluim JP, Viergever MA, and Ramsey NF (2012). Navigation of a Telepresence Robot via Covert Visuospatial Attention and Real-Time fMRI. Brain Topography. PMID: 22965825