In a letter to Nature, University of Miami researchers Michael McCullough and David Kelly propose a trading scheme to reduce false results.
Neuroskeptic readers will know that concern over false-positive science is growing. Many solutions have been proposed, but McCullough and Kelly's is quite novel:
Cap-and-trade systems have proved useful in cutting pollutants such as sulphur dioxide, nitrogen oxides and lead additives in petrol. We suggest that they could also be applied to reduce pollution of the scientific literature with irreproducible results. [Currently], researchers do not have to face the cost of publishing their own unverifiable results (most of which could have been prevented). That cost is borne by the scientific community and the public — for instance, in subsequent research inspired by false positives, which can lead to badly designed policies. Cap-and-trade systems force excessive polluters to purchase permits. Initially, institutions could receive 5 free permits per 100 published results, reflecting the widely accepted ideal of a 5% false-positive production rate. It would then be necessary for institutions to buy extra permits from others should they ‘emit’ significantly more false positives than this (irrespective of whether these were honest or deliberate errors). Institutions that successfully reduce false positives in their research output could then sell off their surplus permits to other institutions that have exceeded their allocation. This flexibility would create incentives for researchers to find innovative ways to reduce false positives.
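Just to make the permit arithmetic concrete, here's a toy sketch of the accounting the letter describes, with hypothetical institutions and made-up numbers (my illustration, not the authors' implementation): each institution gets 5 free permits per 100 published results, and its balance is simply that allowance minus the false positives it actually 'emitted'.

```python
# Toy illustration of the proposed permit accounting (hypothetical numbers).
# Allowance: 5 free permits per 100 published results (a 5% false-positive rate).
# A positive balance means surplus permits to sell; a negative one means
# the institution must buy permits from others.

def permit_balance(published_results, false_positives, free_rate=0.05):
    """Return (allowance, balance) for one institution."""
    allowance = free_rate * published_results
    return allowance, allowance - false_positives

# Made-up example institutions, not real data
institutions = {
    "Institution A": (400, 12),   # 400 results, 12 false positives
    "Institution B": (300, 22),   # 300 results, 22 false positives
}

for name, (n_results, n_fp) in institutions.items():
    allowance, balance = permit_balance(n_results, n_fp)
    action = "can sell" if balance >= 0 else "must buy"
    print(f"{name}: allowance {allowance:.0f}, emitted {n_fp}, "
          f"{action} {abs(balance):.0f} permits")
```

On these made-up numbers, Institution A ends up with 8 spare permits to sell and Institution B has to buy 7: the trading part of the scheme in miniature.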
On his blog, McCullough expands on this theme, explaining how it might all work in practice. I like this idea, not least because it puts the emphasis on institutions rather than individual researchers. It makes little sense to focus on whether an individual researcher has a high or low false-positive rate, because the sample size (the number of results they publish per year) is too small. It would be unfair to favor someone with 1/5 false positives last year over someone with 2/5; the latter person probably just got unlucky. With institutions publishing hundreds of results per year, though, you could draw much stronger conclusions (I've put some rough numbers on this below).

As neat as it is, this cap-and-trade idea might take a long time to set up. In the interim, I wonder if simply auditing false positives (a necessary prerequisite for the cap-and-trade to work anyway) would be nearly as good. Institutions would still be incentivized to reduce their false-positive rates, even without a formal quota system, purely in terms of reputation. Being publicly known as an institution with a high false-positive rate would be its own punishment; conversely, clean institutions would reap the rewards of being known as clean. These rewards would be intangible at first, but sooner or later a good reputation would turn into concrete success: attracting funding, collaborations, and recruits.
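To put those rough numbers on the sample-size point: here's a quick sketch (my own, with made-up counts) comparing how tightly an observed false-positive count pins down the underlying rate for an individual versus an institution, using exact binomial (Clopper-Pearson) confidence intervals.

```python
# Rough illustration of why individual false-positive rates are too noisy
# to judge, while institution-level rates are not (made-up counts).

from scipy.stats import beta

def clopper_pearson(k, n, alpha=0.05):
    """Exact 95% confidence interval for a proportion with k hits in n trials."""
    lower = beta.ppf(alpha / 2, k, n - k + 1) if k > 0 else 0.0
    upper = beta.ppf(1 - alpha / 2, k + 1, n - k) if k < n else 1.0
    return lower, upper

cases = [
    ("Researcher, 1/5 false positives", 1, 5),
    ("Researcher, 2/5 false positives", 2, 5),
    ("Institution, 30/300 false positives", 30, 300),
]

for label, k, n in cases:
    lower, upper = clopper_pearson(k, n)
    print(f"{label}: observed {k/n:.0%}, 95% CI {lower:.1%} to {upper:.1%}")
```

The two researchers' intervals are huge and overlap almost completely, so the 1/5 versus 2/5 comparison really is mostly luck; the institution's interval is narrow enough to compare meaningfully against a 5% target.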
McCullough ME & Kelly DL (2014). Reproducibility: A trading scheme to reduce false results. Nature, 508(7496). PMID: 24740058