As most of you probably already know, Weisberg et al. set out to test whether adding an impressive-sounding, but completely irrelevant, sentence about neuroscience to explanations for common aspects of human behaviour made people more likely to accept those explanations as good ones. As they noted in their Introduction:
Although it is hardly mysterious that members of the public should find psychological research fascinating, this fascination seems particularly acute for findings that were obtained using a neuropsychological measure. Indeed, one can hardly open a newspaper’s science section without seeing a report on a neuroscience discovery or on a new application of neuroscience findings to economics, politics, or law. Research on nonneural cognitive psychology does not seem to pique the public’s interest in the same way, even though the two fields are concerned with similar questions.
They found that the pointless neuroscience made people rate bad psychological "explanations" as being better. The bad psychological explanations were simply descriptions of the phenomena in need of explanation (something like "People like dogs because they have a preference for domestic canines"). Without the neuroscience, people could tell that the bad explanations were bad, compared to other, good explanations. The neuroscience blinded them to this. This confusion was equally present in "normal" volunteers and in cognitive neuroscience students, although cognitive neuroscience experts (PhDs and professors) seemed to be immune.
But is this really true?
This kind of research - which claims to provide hard, scientific evidence for the existence of a commonly believed-in psychological phenomenon, usually some annoyingly irrational human quirk - is dangerous; it should always be read with extra care. The danger is that the results can seem so obviously true ("Well of course!") and so important ("How many times have I complained about this?") that the methodological strengths and weaknesses of the study go unnoticed. People see a peer-reviewed paper which seemingly confirms the existence of one of their pet peeves, and they believe it - becoming even more peeved in the process.(*)
In this case, the peeve is obvious: the popular media certainly seem inordinately keen on neuroimaging studies, and often seem to throw in pictures of brain scans and references to brain regions just to make their story seem more exciting. The number of people who confuse neural localization with explanation is depressing. Those not involved in cognitive neuroscience must find this rather frustrating. Even neuroimagers roll their eyes at it (although some may be secretly glad of it!)
So Weisberg et al. struck a chord with most readers, including most of the potentially skeptical ones - which is exactly why it needs to be read, and critiqued, very carefully. Personally, having done so, I think that it's an excellent paper, but the data presented only allow fairly modest conclusions to be drawn, so far. The authors have not shown that neuroscience, specifically, is seductive or alluring.
Most fundamentally, the explanations including the dodgy neuroscience differed from the non-neurosciencey explanations in more than just neuroscience. Most obviously, they were longer, which may have made them seem "better" to the untrained, or bored, eye; indeed the authors themselves cite a paper, Kikas (2003), in which the length of explanations altered how people perceived them. Secondly, the explanations with added neuroscience were more "complex" - they included two separate "explanations", a psychological one and a neuroscience one. This complexity, rather than the presence of neuroscience per se, might have contributed to their impressiveness.
Perhaps the authors should have used three conditions - psychology, "double psychology" (with additional psychological explanations or technical terminology), and neuroscience (with additional neuroscience). As it stands, all the authors have strictly shown is that longer, more jargon-filled explanations are rated as better - which is an interesting finding, but is not necessarily specific to neuroscience.
In their discussion (and to their credit) the authors fully acknowledge these points (emphasis mine):
Other kinds of information besides neuroscience could have similar effects. We focused the current experiments on neuroscience because it provides a particularly fertile testing ground, due to its current stature both in psychological research and in the popular press. However, we believe that our results are not necessarily limited to neuroscience or even to psychology. Rather, people may be responding to some more general property of the neuroscience information that encouraged them to find the explanations in the With Neuroscience condition more satisfying.
But this is rather a large caveat. If all the authors have shown is that people can be "Blinded with Science" (yes...like the song) in a non-specific manner, that has little to do with neuroscience. The authors go on to discuss various interesting, and plausible, theories about what might make seemingly "scientific" explanations seductive, and why neuroscience might be especially prone to this - but they are, as they acknowledge, just speculations. At this stage, we don't know, and we don't know how important this effect is in the real world, when people are reading newspapers and looking at pictures of brain scans.
Secondly, the group differences - between the "normal people", the neuroscience students, and the neuroscience experts - are hard to interpret. There were 81 normal people, mean age 20, but we don't know who they were or how they were recruited - were they students, internet users, the authors' friends? (10 of them didn't give their age, and for 2 gender was "unreported" - ?) We don't know whether their level of education, their interests, or their values were different from those of the cognitive neuroscience students in the second group (mean age 20), who may likewise have been different in terms of education, intelligence and beliefs from the expert neuroscientists in the third group (mean age 27). Maybe such personal factors, rather than neuroscience knowledge, explained the group similarities and differences?
Finally, the effects seen in this paper were, on the face of it, small - people rated the explanations on a 7 point scale from -3 (bad) to +3 (excellent), but the mean scores were all between -1 and +1. The dodgy neuroscience added about 1 point on a 7 point scale of satisfactoriness. Is that "a lot" or "a little"? It's impossible to say.
All of that said - this is still a great paper, and the point of this post is not to criticize or "debunk" Weisberg et al.'s excellent work. If you haven't read their paper, you should read it, in full, right now, and I'm looking forward to further work from the same group. What I'm trying to do is to warn against another kind of seductive allure, probably the oldest and most dangerous of all - the allure of that which confirms what we already thought we knew.
(*) Or do they? Or is this just one of my pet peeves? Maybe I need to do an experiment about the allure of psychology papers confirming the allure of psychologists' pet peeves...
Deena Skolnick Weisberg, Frank C. Keil, Joshua Goodstein, Elizabeth Rawson, Jeremy R. Gray (2008). The Seductive Allure of Neuroscience Explanations. Journal of Cognitive Neuroscience, 20(3), 470-477. DOI: 10.1162/jocn.2008.20040