
Real Data Are Messy

By Neuroskeptic | November 27, 2015 5:29 PM


Over at the sometimes i'm wrong blog, psychologist Michael Inzlicht tells A Tale of Two Papers. Inzlicht describes how, as associate editor at the Journal of Experimental Psychology: General, he rejected a manuscript despite very positive peer review reports. The article reported seven studies, all of which found nice, statistically significant evidence for the hypothesis in question. So why reject it? Because, to Inzlicht, it was just too good to be true. Real data just aren't that consistent, which suggested that the results had been made to look consistent through p-hacking, selective reporting, or other biases. So he declined to accept the paper and told the authors his concerns.

That was a radical move. Until a few years ago, at many psychology journals, you would have struggled to publish a paper that wasn't a perfect series of uniformly positive results. Today there's a little more skepticism, but this is the first case I know of where a paper has been rejected for being too good.

The manuscript was rejected, but the authors resubmitted it. The revised version included all of the studies they ran, with no p-hacking. Of the 18 hypothesis tests in the new version, only two were statistically significant - that's 2/18 positive, down from 7/7 originally! However, a meta-analysis of all the studies still returned a significant positive result. Inzlicht accepted this second version, and it was published in June. He says "I am a huge fan of this second paper" and calls it "a model of transparency and a template for the kinds of things we should be seeing more of in our top journals".

Now, this is a great development in many ways. However, I wonder what would have happened if the 'honest' version of the manuscript had not ended up with a significant overall result. Would the paper still have been accepted? I'm sure the answer is "yes" if the true picture were a nice, clean negative result.
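The arithmetic here is worth seeing in action: a set of individually underpowered studies can be mostly non-significant on their own while a meta-analysis of the same data comes out clearly positive. Below is a minimal simulation sketch in Python (stdlib only). The parameters - 18 studies of 50 subjects per group and a small true effect of 0.2 standard deviations - are assumptions chosen for illustration, not figures from the paper in question; the studies are pooled with Stouffer's z method.

```python
import math
import random
import statistics

random.seed(1)

def two_sample_z(x, y):
    """Large-sample two-sided z test for a difference in means."""
    se = math.sqrt(statistics.variance(x) / len(x) + statistics.variance(y) / len(y))
    z = (statistics.mean(x) - statistics.mean(y)) / se
    p = math.erfc(abs(z) / math.sqrt(2))  # two-sided p from the normal tail
    return z, p

k, n, d = 18, 50, 0.2  # 18 studies, 50 per group, true effect = 0.2 SD (assumed)
zs, sig = [], 0
for _ in range(k):
    treated = [random.gauss(d, 1) for _ in range(n)]  # group with the true effect
    control = [random.gauss(0, 1) for _ in range(n)]  # null group
    z, p = two_sample_z(treated, control)
    zs.append(z)
    sig += p < 0.05

# Stouffer's method: pool the per-study z scores
z_pool = sum(zs) / math.sqrt(k)
p_pool = math.erfc(abs(z_pool) / math.sqrt(2))

print(f"{sig}/{k} studies individually significant at p < 0.05")
print(f"pooled z = {z_pool:.2f}, meta-analytic p = {p_pool:.4f}")
```

With a small true effect, most of the 18 simulated studies miss the p < 0.05 threshold on their own, yet the pooled z score - which grows with the square root of the number of studies - is comfortably significant. That is the pattern the revised, honestly reported paper showed, and it is exactly what unfiltered data from an underpowered literature should look like.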
The past few years have seen a lot of discussion of the importance of publishing negative studies, and the Journal of Experimental Psychology: General has a commendable record of publishing null findings. But what if the true 'warts-and-all' picture was just, well, a mess? What if the overall meta-analytic p-value was 0.08 - that is, not significant, but too close to significance to be convincingly negative?

Inzlicht, I think, would have published that. With other journals and other editors, I'm not so sure. Journals want neat, tidy results that fit into a memorable narrative. You can make a nice narrative out of neat positive results, and (usually) out of neat null results, but inconclusive findings almost by definition preclude a clear story.

Yet messy results are not the same thing as bad results. Statistically, some p-values are going to end up at 0.08 or 0.049 even if the experiments and analyses were flawless. Journals should ask authors to be honest, and they shouldn't discriminate against authors who honestly report their messy data. The stigma of negative results is being broken, and that's great. Inconclusive results should be accepted too.
