In all the fuss over the pressure for scientists to publish positive results, we may have been missing an equally dangerous kind of publication bias operating in the opposite direction.
So say Luijendijk and Koolman in the Journal of Clinical Epidemiology: The incentive to publish negative studies: how beta-blockers and depression got stuck in the publication cycle.
The background here is the possible link between beta blockers and depression. Beta blockers are drugs widely used to treat high blood pressure. Some studies have reported that they raise the risk of depression, though many others found no link. Propranolol is said by some to be the worst offender because it's best at entering the brain.
Luijendijk and Koolman say that beta blocker-depression studies have appeared in the form of "publication cycles" - first a positive study appears, and then negative ones follow. Then another study finds a positive link using a different method - and rebuttals, using those methods, soon appear. They sketch out several such positive-negative cycles based on different methods and particular hypotheses.
Now, there are two ways to look at this. You could explain it in terms of standard positive publication bias. Maybe lots of people looked into a possible link, and the ones who found nothing didn't publish. Then someone, by chance, did find an association with depression, and they published it. Once that happened, the question became a hot topic, so the unpublished negative studies were dusted off and submitted.
But there's a more worrying possibility. What if the original positive studies were correct, and the subsequent negative studies were the product of an inverse publication bias in favor of contrarian negative results?
The publication cycles in the literature about beta-blockers and depression seem to suggest that the very publication of positive studies, whether true or false, increases the incentive to publish negative results, whether true or false... [in the case in question] the first as well as a significant number of subsequent negative studies were published in high-impact journals (8 of 19 journals with 2009 impact factor greater than 4.0). Third, power analysis showed that in two cycles, the first negative studies were underpowered...
If a true-positive study stimulated the publication of one or more false-negative studies, again an invalid picture of the true association would emerge. Publication of false-negative studies may thus give rise to publication bias, just like publication of false-positive studies. Research groups usually compete to get the first positive study published in a high-impact journal. It has been suggested that it could also be worthwhile to aim at getting the first study that challenges the former published.

This is not an entirely new idea. It was described in the classic Why Most Published Research Findings Are False, but only in passing.
To be honest, it's impossible to know, in any particular case, whether inverse publication bias is at work. Depending on whether you think beta blockers cause depression (and that's still controversial), your interpretation of the biases in the literature will probably differ.
However, I think the basic idea is important. Publication bias isn't a bias in favor of positive results per se. It's a bias towards "interesting" results - which in most cases means positive ones, but could equally well include negative ones in certain contexts. In some ways, this could be a good thing, if the negative and positive biases eventually cancelled out, leaving a neutral playing field; but there's no guarantee that would ever happen.
Luijendijk, H., and Koolman, X. (2012). The incentive to publish negative studies: how beta-blockers and depression got stuck in the publication cycle. Journal of Clinical Epidemiology. DOI: 10.1016/j.jclinepi.2011.06.022