

Debating Psychology's Replication Crisis

Should psychology researchers focus more on confirming old results and less on new discoveries?

By Jonathon Keats | August 25, 2016 8:00 PM

Thinking statue (Credit: Brian Siewiorek/Flickr)



For more than 50 years, psychologists have worried about the robustness of research in their field. Many studies have never been replicated, meaning nobody knows what the results would be if they were repeated in another lab. Last year, psychologist Brian Nosek led a consortium of nearly 300 scientists who published the first attempt to estimate the reproducibility rate in psychology by redoing 98 recent studies.

The scientists couldn’t reproduce the initial results about 60 percent of the time, according to a paper published in Science, resulting in “reproducibility crisis” headlines across the globe. Colleagues have since questioned the validity of the group’s conclusions, however. In Science Smackdown, we let experts argue both sides.

Crisis of Confidence

University of Virginia social psychologist Brian Nosek and his collaborators replicated studies ranging from free will to memory, and they tried to be as faithful as possible to the initial experiments, even consulting with the original scientists.

Brian Nosek (Credit: Stephen Voss)

In some cases, the group’s findings challenge what researchers thought they knew. However, Nosek insists we shouldn’t simply reject the original results; interpreting a failed replication is more complicated than that. “Is it because the original study was wrong, because the replication was screwed up or because of an important difference between the original and the replication?” he says. “All three are possible.”

Finding answers depends on replication studies becoming more common, which will require researchers, funders and journals to place less emphasis on new discoveries. The study has generated fresh interest in reproducibility, which Nosek considers more important than the bleak statistics. “I think we’ve proven that replication is not boring,” he says.

Don't Worry, Be Happy

Harvard psychologist Daniel Gilbert believes Nosek and his colleagues proved nothing, and his recent rebuttal in Science challenges both their methodology and results. “The paper provides no evidence whatsoever for a replication crisis,” he says. “They drew an unrepresentative sample of studies, failed to ensure that replications were faithful and then misanalyzed their own data.”

Daniel Gilbert (Credit: Kris Krug)

Gilbert believes such flaws are almost inevitable, since faithful replication of a representative sample of studies would be prohibitively expensive and time-consuming. The consortium’s ill-gotten statistics have severely damaged psychology’s reputation, Gilbert contends. Moreover, he questions the underlying motivation for conducting such meta-studies. “What would we do with a specific ‘replicability number’ anyway?” he asks. “While academics wring their hands about a replication crisis, business, government, law and medicine are putting psychology’s discoveries to work to improve the human condition.”

[This article originally appeared in print as "The Replication Crisis."]
