This very blog forms a large part of a newly published study on research methods blogs in psychology. The paper has a spicy backstory.
Back in 2016, psychologist Susan Fiske caused much consternation with a draft article which branded certain (unnamed) bloggers as being “bullies” and “destructo-critics” who “destroy lives” through “methodological terrorism.”
Fiske’s post (which later appeared in a more moderate version) was seen as pushback against bloggers who criticized the robustness and validity of published psychology papers. According to Fiske, this criticism often spilled over into personal attacks on certain individuals. Much debate ensued.
Now, Fiske is the senior author of the new study, which was carried out to examine the content and impact of 41 blogs that have posted on psychology methods, and, in particular, to find out which individual researchers were being mentioned (presumably, criticized) by name.
The included blogs (listed in the supplementary material) were a fairly comprehensive list, as far as I can see. My blog has the second largest number of posts out of all the blogs included (1180), but this pales in comparison with Andrew Gelman's 7211, although that is a multi-author blog. All posts were downloaded and subjected to text mining analysis. Data was collected in April 2017.
The results about the bloggers' 'targets' were fairly unsurprising to me. It turned out that, out of a list of 38 researchers who were nominated as potential targets, the most often mentioned name was Daryl Bem (of precognition fame), followed by Diederik Stapel (fraud), and then Brian Wansink and Jens Förster (data 'abnormalities').
These results seem inconsistent with the idea that bloggers were especially targeting female researchers, which had been one of the bones of contention in the 2016 debate. As the paper says:
Equal numbers of men and women were nominated, but nominated men were mentioned in posts more often.
I would note, though, that many of the male names high on the list have been 'officially' found guilty, or resigned (Stapel, Wansink, Förster, Smeesters), while none of the women have been, to my knowledge (Fredrickson, Schnall, Cuddy). At best you could try to argue that bloggers unfairly target innocent women? I'm not sure that this kind of question can be answered with quantitative data, anyway.
I have to say that it's to Fiske's credit that she carried out this detailed analysis of blogs in the wake of the firestorm over her 2016 comments. She could easily have walked away from the whole topic, but instead she decided to collect some real data. On the other hand, I agree with Hilda Bastian's comments on the statistical weaknesses of this paper:
In some ways, the study has more relevance to a debate about weaknesses in methods in psychological science than it does to science blogging. It’s a small, disparate English-language-biased sample of unknown representativeness, with loads of exploratory analyses run on it. (There were 41 blogs, with 11,539 posts, of which 73% came from 2 blogs.) Important questions about power are raised, but far too much is made of analyses by gender and career stage for such a small and biased sample. And they studied social media, but not Twitter.