Blinded Analysis For Better Science?

By Neuroskeptic | Nov 3, 2015 7:40 PM


In an interesting Nature comment piece, "Blind analysis: Hide results to seek the truth", Robert MacCoun and Saul Perlmutter argue that "more fields should, like particle physics, adopt blind analysis to thwart bias". As they put it:

Decades ago, physicists including Richard Feynman noticed something worrying. New estimates of basic physical constants were often closer to published values than would be expected given standard errors of measurement. They realized that researchers were more likely to 'confirm' past results than refute them — results that did not conform to their expectation were more often systematically discarded or revised. To minimize this problem, teams of particle physicists and cosmologists developed methods of blind analysis: temporarily and judiciously removing data labels and altering data values... Blind analysis ensures that all analytical decisions have been completed, and all programmes and procedures debugged, before relevant results are revealed to the experimenter. One investigator - or a computer program - methodically perturbs data values, data labels or both, often with several alternative versions of perturbation. The rest of the team then conducts as much analysis as possible 'in the dark'. Before unblinding, investigators should agree that they are sufficiently confident of their analysis to publish whatever the result turns out to be, without further rounds of debugging or rethinking.

As a procedure, blind analysis has much in common with preregistration. Both involve the creation of a "Chinese wall" that prevents knowledge of the results from affecting decisions about the analysis. Both require a hypothesis of interest to be framed in advance. Both are intended to prevent p-hacking and other conscious and unconscious biases.

Blind analysis shares many of the same limitations as preregistration, too. MacCoun and Perlmutter discuss concerns such as "won't people just peek at the raw data?". This is analogous to a criticism commonly raised against preregistration: "won't people just preregister retrospectively?" Both methods ultimately rest on trust.

MacCoun and Perlmutter discuss preregistration, but they say that blind analysis is better as it offers the advantage of flexibility: "preregistration requires that data-crunching plans are determined before analysis... but many analytical decisions (and computer programming bugs) cannot be anticipated."

However, I think that preregistration and blind analysis could work together. Each brings important benefits. For instance, preregistration ensures that negative results don't just disappear unpublished. Indeed, with pre-peer review, it can help them get published, if a journal agrees to publish the paper on the strength of the methods, before the results (negative or otherwise) are collected. Blinded analysis, alone, doesn't achieve that.

But blinded analysis could help to make preregistration more useful. A prespecified analysis plan could incorporate a blinded phase. So, rather than having to decide at the outset how (say) outliers will be treated, researchers could leave this question open, and then decide based on a blinded look at the final data. It would be the best of both worlds.
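To make the perturbation idea concrete, here is a minimal sketch of one common blinding scheme, a hidden additive offset applied to the outcome variable. This is an illustration of the general technique, not the specific procedure MacCoun and Perlmutter describe; the function names, seed, and data are my own invention.

```python
import numpy as np

rng = np.random.default_rng(seed=2015)

def blind(values, max_shift=10.0):
    """One 'blinder' perturbs the data with a secret offset.

    Returns the blinded data and the offset, which the blinder
    keeps to themselves until the analysis is frozen.
    """
    offset = rng.uniform(-max_shift, max_shift)
    return values + offset, offset

def unblind(blinded_values, offset):
    """Reveal the true values only after all analytical choices are fixed."""
    return blinded_values - offset

# Hypothetical measurements of some quantity of interest
true_data = np.array([4.2, 5.1, 3.9, 4.8])
blinded, secret_offset = blind(true_data)

# The rest of the team debugs the pipeline, sets outlier rules, and
# picks models using `blinded`, without knowing whether the real
# answer is high or low. Only then do they unblind:
recovered = unblind(blinded, secret_offset)
assert np.allclose(recovered, true_data)
```

In practice, particle-physics collaborations use more elaborate schemes (shuffled labels, multiple fake-perturbed copies of the dataset), but the principle is the same: analytical decisions are locked in before anyone can see the unperturbed result.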
