
Better Journals... Worse Statistics?

By Neuroskeptic | February 20, 2013 1:06 AM



Some of the world's leading scientific journals are worryingly lax in ensuring that their papers contain adequate statistical details.


So say Italian researchers Tressoldi and colleagues in a provocative paper just out: High Impact = High Statistical Standards? Not Necessarily So.

They considered all articles published in 2011 that concerned any kind of psychological or medical study on human subjects. Four elite journals (Science, Nature, NEJM and The Lancet) were pitted against three much less 'impactful' publications.

It turned out that the good journals were often not very good at requiring authors to report their results in an informative way. Science and Nature were especially bad offenders: in the great majority of papers, there was no indication of the effect size or confidence intervals. This means that authors were allowed to state that some effect was occurring (it was statistically significant), but they didn't have to say how large the effect was. Leaving out such information is almost universally frowned upon, and you'd hope that the best journals would enforce good practice. The low-ranked journals were a mixed bunch, but were generally better than Science and Nature in this regard.
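To make concrete what this missing information looks like: an effect size such as Cohen's d expresses how large a group difference is in standardized units, and a confidence interval conveys the precision of that estimate. Here is a minimal sketch of computing both for two samples, using only the Python standard library and the common large-sample approximation for the CI (the data values are made up for illustration):

```python
import math
import statistics

def cohens_d(group_a, group_b):
    """Standardized mean difference (Cohen's d) between two samples."""
    na, nb = len(group_a), len(group_b)
    mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    # Pooled standard deviation across both groups
    sp = math.sqrt(((na - 1) * var_a + (nb - 1) * var_b) / (na + nb - 2))
    return (mean_a - mean_b) / sp

def d_confidence_interval(d, na, nb, z=1.96):
    """Approximate 95% CI for d (large-sample variance formula)."""
    se = math.sqrt((na + nb) / (na * nb) + d**2 / (2 * (na + nb)))
    return (d - z * se, d + z * se)

# Hypothetical measurements from two experimental groups
a = [5.1, 4.8, 6.2, 5.9, 5.4, 6.1, 5.0, 5.7]
b = [4.2, 4.9, 4.5, 5.1, 4.0, 4.6, 4.3, 4.8]

d = cohens_d(a, b)
lo, hi = d_confidence_interval(d, len(a), len(b))
print(f"d = {d:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

Reporting the result as "d with its 95% CI" rather than a bare p-value is exactly the kind of detail the paper found missing from most Science and Nature articles.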

By contrast, the two medical journals - NEJM and Lancet - were very good on most measures. My guess is that this is largely because they publish a lot of clinical trials, and these are generally (nowadays) held to higher standards than other research. Either way, this shows that it can be done, Nature and Science.

Of course, just because the stats are poorly reported doesn't mean that they're bad. The science might be great even if the presentation is lacking, and one hopes that for the top-tier journals, it is. But not having details like effect sizes and other important things like power calculations makes it hard to evaluate the validity of the science.


Tressoldi PE, Giofré D, Sella F, & Cumming G (2013). High Impact = High Statistical Standards? Not Necessarily So. PLoS ONE, 8 (2) PMID: 23418533
