Some of the world's leading scientific journals are worryingly lax in ensuring that their papers contain adequate statistical details.
So say Italian researchers Tressoldi and colleagues in a provocative paper just out: High Impact = High Statistical Standards? Not Necessarily So.
They considered all articles published in 2011 that concerned any kind of psychological or medical study on human subjects. Four elite journals (Science, Nature, NEJM and The Lancet) were pitted against three much less 'impactful' publications.
It turned out that the good journals were often not very good at requiring authors to report their results in an informative way. Science and Nature were especially bad offenders: in the great majority of papers, there was no indication of the effect size or confidence intervals. This means that authors were allowed to state that some effect was occurring (it was statistically significant), but they didn't have to say how large the effect was. Leaving out such information is almost universally frowned upon, and you'd hope that the best journals would enforce good practice. The low-ranked journals were a mixed bunch, but were generally better than Science and Nature in this regard.
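To make the distinction concrete, here's a rough sketch (in Python, with simulated numbers that have nothing to do with the papers surveyed) of the difference between reporting a bare p-value and also reporting an effect size with a confidence interval:

```python
# Illustrative only: simulated data, not from any study discussed above.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
treatment = rng.normal(loc=0.3, scale=1.0, size=50)  # hypothetical treatment group
control = rng.normal(loc=0.0, scale=1.0, size=50)    # hypothetical control group

# "Significance only" reporting: a p-value says an effect is probably there...
t_stat, p_value = stats.ttest_ind(treatment, control)

# ...but not how big it is. Cohen's d and a 95% CI for the mean difference do that.
n1, n2 = len(treatment), len(control)
pooled_sd = np.sqrt(((n1 - 1) * treatment.var(ddof=1) +
                     (n2 - 1) * control.var(ddof=1)) / (n1 + n2 - 2))
cohens_d = (treatment.mean() - control.mean()) / pooled_sd

diff = treatment.mean() - control.mean()
se_diff = pooled_sd * np.sqrt(1 / n1 + 1 / n2)
t_crit = stats.t.ppf(0.975, df=n1 + n2 - 2)
ci_low, ci_high = diff - t_crit * se_diff, diff + t_crit * se_diff

print(f"p = {p_value:.3f}")
print(f"Cohen's d = {cohens_d:.2f}, 95% CI for the difference: [{ci_low:.2f}, {ci_high:.2f}]")
```

The p-value alone would let an author claim "a significant effect"; the second set of numbers is what tells a reader whether the effect is big enough to care about.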
By contrast, the two medical journals - NEJM and Lancet - were very good on most measures. My guess is that this is largely because they publish a lot of clinical trials, and these are generally (nowadays) held to higher standards than other research. Either way, this shows that it can be done, Nature and Science.
Of course, just because the stats are poorly reported doesn't mean that they're bad. The science might be great even if the presentation is lacking, and one hopes that for the top-tier journals, it is. But without details such as effect sizes and power calculations, it's hard to evaluate the validity of the science.
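For what it's worth, here's a brief sketch of the kind of power calculation at stake: given an expected effect size, how many participants per group would be needed to detect it reliably? This uses the statsmodels library and an assumed medium effect; none of these numbers come from the paper.

```python
# Illustrative power calculation for a two-group comparison (assumed inputs).
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5,  # assumed medium effect (Cohen's d)
                                   alpha=0.05,       # significance threshold
                                   power=0.8)        # desired chance of detecting the effect
print(f"Participants needed per group: {n_per_group:.0f}")  # roughly 64
```

A paper that reports nothing of this kind leaves readers unable to tell whether a null result reflects no effect or simply an underpowered study.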
Tressoldi PE, Giofré D, Sella F, & Cumming G (2013). High Impact = High Statistical Standards? Not Necessarily So. PLoS ONE, 8(2). PMID: 23418533