You've just finished doing some research using fMRI to measure brain activity. You designed the study, recruited the volunteers, and did all the scans. Phew. Is that it? Can you publish the findings yet?
Unfortunately, no. You still need to do the analysis, and this is often the trickiest stage. The raw data produced during an fMRI experiment are meaningless - in most cases, each scan will give you a few hundred almost-identical grey pictures of the person's brain. Making sense of them requires some complex statistics.
The very first step is choosing which software to use. Just as some people swear by Firefox while others prefer Internet Explorer for browsing the web, neuroscientists have various options to choose from in terms of image analysis software. Everyone's got a favourite. In Britain, the most popular are FSL (developed at Oxford) and SPM (London), while in the USA BrainVoyager sees a lot of use.
These three all do pretty much the same thing, give or take a few minor technical differences, so which one you use ultimately makes little difference. But just as there's more than one way to skin a cat, there's more than one way to analyze a brain. A paper from Fusar-Poli et al. compares the results you get with SPM to the results obtained using XBAM, a program which takes a quite different statistical approach.
Here's what happened, according to SPM, when 15 volunteers looked at pictures of faces expressing fear, and their brain activity was compared to when they were just looking at a boring "X" on the screen (I think - either that, or it's compared to looking at neutral faces; the paper isn't clear, but given the size of the blobs I doubt it's that.)
Various bits of the brain were more activated by the scared face pics, as you can see by the huge, fiery blobs. The activation is mostly at the back of the brain, in occipital cortex areas which deal with vision, which is as you'd expect. The cerebellum was also strongly activated, which is a bit less expected.
Now, here's what happens if you analyze exactly the same data using XBAM, setting the statistical threshold at the same level (i.e. in theory being no more or less "strict") -
You get the same visual system blobs, but you also see activation in a number of other areas. Or as Fusar-Poli et al put it -
Analysis using both programs revealed that during the processing of emotional faces, as compared to the baseline stimulus, there was an increased activation in the visual areas (occipital, fusiform and lingual gyri), in the cerebellum, in the parietal cortex [etc] ... Conversely, the temporal regions, insula and putamen were found to be activated using the XBAM analysis software only.
This raises two questions: why the difference, and which way is right?
The difference must be a product of the different methods used. By default, SPM uses a technique called statistical parametric mapping (hence the name), which rests on the assumption of normality; FSL and BrainVoyager do too. XBAM, on the other hand, differs from the more orthodox packages in a number of ways. The most basic difference is that it uses non-parametric statistics, but this document lists no fewer than five major innovations:
(Edit: although see the comments below this post)
- "not to assume normality but to use permutation testing to construct the null distribution used to make inference about the probability of an "activation" under the null hypothesis."
- "recognizing the existence of correlation in the residuals after fitting a statistical model to the data."
- using "a mixed effects analysis of group level fMRI data by taking into account both intra and inter subject variances."
- using "3D cluster level statistics based on cluster mass (the sum of all the statistical values in the cluster) rather than cluster area (number of voxels)."
- using "a wavelet-based time series permutation approach that permitted the handling of complex noise processes in fMRI data rather than simple stationary autocorrelation."
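To make the first point a bit more concrete, here's a toy sketch of permutation testing in Python. This is not how XBAM actually does it - the data, the sign-flipping scheme, and all the numbers are just illustrative assumptions - but it shows the basic idea: the null distribution is built empirically by shuffling the data, rather than assumed to be normal.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy example: one "voxel", 15 subjects (as in the paper), each with a
# condition difference (fearful faces minus baseline). Values are made up.
n_subjects = 15
diff = rng.normal(loc=0.4, scale=1.0, size=n_subjects)

observed = diff.mean()

# Permutation test: under the null hypothesis, each subject's difference
# is as likely to be positive as negative, so we build the null
# distribution by randomly flipping signs - no normality assumption needed.
n_perm = 10_000
null = np.empty(n_perm)
for i in range(n_perm):
    signs = rng.choice([-1.0, 1.0], size=n_subjects)
    null[i] = (diff * signs).mean()

# Two-tailed p-value: how often is a permuted mean at least as extreme?
p = float(np.mean(np.abs(null) >= abs(observed)))
print(f"observed mean = {observed:.3f}, permutation p = {p:.4f}")
```

The price of dropping the normality assumption is computation: you have to re-run the statistic thousands of times, once per permutation, rather than looking the p-value up from a known distribution.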
Phew. Which combination of these is responsible for the difference is impossible to say.
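The cluster mass vs. cluster area distinction (the fourth point in the list above) is also easy to illustrate. Here's a deliberately simplified 1-D sketch - real statistical maps are 3D, and these t-values are invented - showing how the two criteria can rank the same clusters differently.

```python
# Toy 1-D "statistical map": t-values along a row of voxels, made up for
# illustration. A real map is 3D, but the logic is the same.
tvals = [0.5, 4.8, 4.5, 0.3, 2.1, 2.2, 2.3, 2.4, 0.2]
threshold = 2.0

# Collect supra-threshold clusters: runs of contiguous voxels above threshold
clusters = []
current = []
for t in tvals:
    if t > threshold:
        current.append(t)
    elif current:
        clusters.append(current)
        current = []
if current:
    clusters.append(current)

for c in clusters:
    area = len(c)   # "cluster area": number of voxels in the cluster
    mass = sum(c)   # "cluster mass": sum of the statistic over the cluster
    print(f"area = {area}, mass = {mass:.1f}")
```

Here the second cluster wins on area (4 voxels vs. 2) but the first wins on mass (9.3 vs. 9.0): a small cluster of very strong activation can outweigh a broad cluster of weak activation, which is exactly the kind of choice that can shift which blobs survive thresholding.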
The biggest question, though, is: should we all be using XBAM? Is it "better" than SPM? This is where things get tricky. The truth is that there's no single right way to statistically analyze any data, let alone fMRI data. There are lots of wrong ways, but even if you avoid making any mistakes, there are still various options as to which statistical methods to use, and which method you use depends on which assumptions you're making. XBAM rests on different assumptions than SPM does.
Whether XBAM's assumptions are more appropriate than those of SPM is a difficult question. The people who wrote XBAM presumably think so, and they're very smart people. But so are the people who wrote SPM. The point is, it's a very complex issue, the mathematical details of which go far beyond the understanding of most fMRI users (myself included).
My worry about this paper is that the average Joe Neuroscientist will decide that, because XBAM produces more activation than SPM, it must be "better". The authors are careful not to say this, but for fMRI researchers working in the publish-or-perish world of modern science, and whose greatest fear is that they'll run an analysis and end up with no blobs at all, the temptation to think "the more blobs the merrier" is a powerful one.
Fusar-Poli, P., Bhattacharyya, S., Allen, P., Crippa, J., Borgwardt, S., Martin-Santos, R., Seal, M., O’Carroll, C., Atakan, Z., & Zuardi, A. (2010). Effect of image analysis software on neurofunctional activation during processing of emotional human faces. Journal of Clinical Neuroscience. DOI: 10.1016/j.jocn.2009.06.027