I’ve written for a long time about problems in how science is communicated and published. One of the best-known concerns in this area is publication bias: the tendency for results that confirm a hypothesis to be published more easily than those that don’t.
Publication bias has many contributing factors, but the peer review process is widely seen as a crucial driver: reviewers, the belief goes, look more favorably on “positive” (i.e., statistically significant) results.
But is this reviewer preference for positive results real? A recently published study suggests that it exists, but that the effect is not a large one.
Researchers Malte Elson, Markus Huff, and Sonja Utz carried out a clever experiment to measure the impact of statistical significance on peer review evaluations. The authors organized a 2015 conference to which researchers submitted abstracts for peer review.
The ...