Why Scientists Shouldn't Replicate Their Own Work

By Neuroskeptic
Feb 26, 2017


Last week, I wrote about a social psychology paper which was retracted after the data turned out to be fraudulent. The sole author on that paper, William Hart, blamed an unnamed graduate student for the misconduct. Now, more details have emerged about the case.

On Tuesday, psychologist Rolf Zwaan blogged about how he was the one who first discovered a problem with Hart's data, in relation to a different paper. Back in 2015, Zwaan had co-authored a paper reporting a failure to replicate a 2011 study by Hart & Albarracín. During the peer review process, Hart and his colleagues were asked to write a commentary that would appear alongside the paper. Zwaan reports that Hart's team submitted a commentary which presented their own successful replication of the finding in question. However, Zwaan was suspicious of this convenient "replication" and decided to take a look at the raw data. He noticed anomalies and, after some discussion, Hart's "replication" was removed from the commentary. When the commentary was eventually published, it contained no reference to the problematic replication.

Meanwhile, following an investigation, Hart's nameless student confessed to manipulating the data in the "replication" and also in other previous studies - Hart's retracted paper being one of them.

There are a number of lessons we can take from this story, but to me it serves as a reminder that scientists should not be replicating their own work. Replication is a crucial part of science, but "auto-replications" put researchers under great pressure to find a certain result. For a career-minded scientist, failing to replicate your own work is worse than never doing the replication at all. First, because replications are less sexy than original studies and usually end up in low-ranking journals. But it gets worse - if you publish an effect and then later fail to replicate it, an observer (e.g. someone deciding whether to award you a grant, fellowship, or job) might conclude that you don't know what you're doing.

In order to succeed, researchers today are expected to craft and project a "career narrative" in which all of their experiments and papers constitute a beautiful upward arc of progress. It's very difficult to fit a negative auto-replication into such a tidy and optimistic story. This is why "failed" studies, especially replications, tend to end up unpublished. Or, as in the Hart case, worse happens.

Here's another way of looking at it: a replication attempt has much in common with peer review, in that both are evaluations of the validity of a scientific claim. Who would want scientists to peer review their own work? So I wonder if we should "discount" apparently successful auto-replications: perhaps when performing a meta-analysis, we should include only the largest study from each research group and ignore the others. I think we certainly shouldn't expect scientists to replicate their own work before they can publish it. Rather, we should encourage scientists to perform more independent replications of other people's studies.
