

Psychology Should Aim For 100% Reproducibility

By Neuroskeptic | September 7, 2015 2:58 PM


Last week, the Open Science Collaboration reported that only 36% of a sample of 100 claims from published psychology studies were successfully replicated: Estimating the reproducibility of psychological science. A reproducibility rate of 36% seems bad. But what would be a good value? Is it realistic to expect all studies to replicate? If not, where should we set the bar? In this post I'll argue that it should be 100%.


First off however, I'll note that no single replication attempt will ever have a 100% chance of success. A real effect might always, just by chance, not be statistically significant, although with enough statistical power (i.e. by collecting enough data) this chance can be made very low. Therefore, when I say we should aim for "100% reproducibility", I don't mean that 100% of replications should succeed, but rather that the rate of successful replications should be 100% of the statistical power. In the Open Science Collaboration's study, for example, the average power of the 100 replication studies was 0.92. So 100% reproducibility would mean 92 positive results.

Is this a realistic goal? Some people argue that if psychologists were only studying highly replicable effects, they would be studying trivial ones, because interesting psychological phenomena are more subtle. As one commenter put it,
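The arithmetic behind "100% of the statistical power" can be sketched in a few lines. This is a minimal illustration, not anything from the Open Science Collaboration's analysis: it assumes every studied effect is real and that each replication attempt succeeds independently with probability equal to its statistical power.

```python
import random

def expected_replications(n_studies, power):
    """Expected number of successful replications if every effect is
    real and each attempt succeeds with probability equal to the
    replication study's statistical power."""
    return n_studies * power

def simulate_replications(n_studies, power, seed=0):
    """Monte Carlo version: each study replicates with probability
    `power`, purely by the luck of sampling."""
    rng = random.Random(seed)
    return sum(rng.random() < power for _ in range(n_studies))

# With 100 replication attempts at an average power of 0.92,
# "100% reproducibility" corresponds to about 92 successes,
# even if every single underlying effect is genuine.
print(expected_replications(100, 0.92))
print(simulate_replications(100, 0.92))
```

The point of the simulation is that a handful of failed replications is entirely consistent with a world where every effect is real; it's the shortfall below the power-implied baseline (92 expected vs. 36 observed) that demands explanation.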

Alan Kraut, executive director of the Association for Psychological Science and a board member of the Center for Open Science, noted that even statistically significant "real findings" would "not be expected to replicate over and over again… The only finding that will replicate 100 per cent of the time is likely to be trite, boring, and probably already known."

I don't buy this. It may be true that, in psychology, most of the large effects are trivial, but this doesn't mean that the small, interesting effects are not replicable. 100% reproducibility, limited only by statistical power, is a valid goal even for small effects. Another view is that interesting effects in psychology are variable or context-dependent. As Lisa Feldman Barrett put it, if two seemingly identical experiments report different results, one confirming a phenomenon and the other not,

Does this mean that the phenomenon in question is necessarily illusory? Absolutely not. If the studies were well designed and executed, it is more likely that the phenomenon... is true only under certain conditions.

Now, my problem with this view is that it makes scientific claims essentially unfalsifiable. Faced with a null result, we could always find some contextual variable, however trivial, to 'explain' the lack of an effect post hoc.

It's certainly true that many (perhaps all!) interesting phenomena in psychology are context-dependent. But this doesn't imply that they're not reproducible. Reproducibility and generalizability are two different things.

I would like to see a world in which psychologists (and all scientists) don't just report the existence of effects, but also characterise the context or contexts in which they are reliably seen.

It shouldn't be enough to say "Phenomenon X happens sometimes, but don't be surprised if it doesn't happen in any given case." Defining when an effect is seen should be part and parcel of researching and reporting it. Under those defined conditions, we should expect effects to be reproducible.


Open Science Collaboration (2015). Estimating the reproducibility of psychological science. Science, 349(6251). PMID: 26315443
