Raw Data: How Widespread is the Problem of Irreproducible Results?

Fire in the Mind | By George Johnson | January 30, 2014 3:17 AM

Last week in the first installment of “Raw Data,” my new monthly column for the New York Times, I reflected on what has become known in science as the problem of irreproducible results. The fear that the corpus of scientific knowledge is becoming polluted with questionable findings -- experiments that cannot be replicated by other laboratories -- has become so great that the journal Nature has promised to implement new measures “improving the consistency and quality of reporting in life-sciences articles” and has compiled an eye-opening archive called “Challenges in Reproducible Research.”

The concerns are arising not just in epidemiological studies -- where some effect (a drug, food, behavior, or an environmental contaminant) is correlated positively or negatively with human health -- but also in bench research: the science of petri dishes and chemical reagents, with subjects ranging in complexity from human cells to genetically altered mice. For me the most jarring revelation was the failure of a team of scientists to replicate the results of 47 of 53 “landmark” experiments in the biology of cancer. I couldn’t help wondering whether any of these studies are ones I cite in my own book, The Cancer Chronicles. Because of confidentiality agreements, the information is being kept under wraps.

What I concentrated on in my Times column was not scientific fraud, which I believe is very rare, but the ease with which we humans -- master detectors of patterns -- can fool ourselves into seeing regularities that aren’t really there. Pictures in the clouds. With all good intentions, such findings can enter the body of scientific literature to be cited and cited without ever being verified. As I've written here before, it can be maddeningly difficult to distinguish between what we see and what we think we see.
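To see how honest science can fool itself without any fraud, consider a toy simulation -- purely illustrative, using made-up numbers rather than anything from the studies above. Run many experiments in which there is no real effect at all, test each one at the conventional p < 0.05 threshold, and count how many look like "discoveries" anyway:

```python
import random
import statistics

# Illustrative sketch: simulate many experiments in which there is NO real
# effect, test each at the conventional p < 0.05 threshold, and count how
# many "succeed" by chance alone.
random.seed(42)

def null_experiment(n=30):
    """Compare two samples drawn from the SAME distribution (no real effect)
    with a crude two-sample z-test; return True if the result looks
    'significant' at the two-sided 5 percent level."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
    z = (statistics.mean(a) - statistics.mean(b)) / se
    return abs(z) > 1.96  # normal-approximation threshold for p < 0.05

trials = 1000
false_hits = sum(null_experiment() for _ in range(trials))
print(f"{false_hits} of {trials} null experiments 'succeeded' anyway")
```

Roughly one experiment in twenty clears the bar even though every effect is imaginary -- and it is the flashy chance "successes," not the quiet failures, that tend to get written up.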
That is a phenomenon that has fascinated me since I wrote my book Fire in the Mind, and I touched on it again in the inaugural post for this blog, "Lighting the Match." The largest share of published research -- about 50 percent -- involves the life sciences, and that field has become the focus of the recent controversy over reproducibility. But what about the equally important work going on in physics, chemistry, geology, and the other so-called physical sciences? My colleague Faye Flam raises that question in a tough critique of my column on the Knight Science Journalism Tracker.

As far as I have been able to discover, research in these fields has not been subjected to the same kind of scrutiny that John Ioannidis and Glenn Begley (sources for my Times column) have brought to bear on the biomedical world. There have been no claims that dozens of landmark findings in, say, high-energy physics or nuclear chemistry have failed replication. One reason may be that higher statistical standards are often used to analyze the data, particularly in particle accelerator research. And highly publicized discoveries like the confirmation of the top quark and the recent discovery of the Higgs boson come with built-in replication: two detectors, run by different teams of scientists, converging on similar results. Flam suggests another reason that the physics literature may be comparatively sound:

[Physicists] say that they can’t just make up any old hypothesis because they are tightly constrained by quantum mechanics and general relativity. And they’re constrained by umpteen measurements of the way atoms and particles and light behave in the real world. So they can’t get away with just dreaming up long-shot hypotheses without violating some aspect of reality as it’s been measured. Another consideration is the fact that human bodies are all different, while all electrons, protons and Higgs Bosons are the same, so naturally you won’t get the same kind of “truth” asking about the mass of the electron as you get asking about the risks and benefits of a daily aspirin.
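A footnote on those "higher statistical standards": particle physicists typically demand five-sigma evidence before claiming a discovery, while much of biomedicine accepts p < 0.05, roughly a two-sigma standard. A short sketch of what those thresholds mean as false-positive probabilities, assuming a normal distribution of fluctuations:

```python
import math

# Two-sided probability of a chance fluctuation at least this many sigmas
# from the mean, under a normal distribution: p = erfc(sigma / sqrt(2)).
def two_sided_p(sigma):
    return math.erfc(sigma / math.sqrt(2))

for sigma in (2, 3, 5):
    print(f"{sigma} sigma -> p = {two_sided_p(sigma):.1e}")
```

Two sigma corresponds to about a 1-in-22 chance of a fluke -- close to the biomedical p < 0.05 convention -- while five sigma is on the order of 6 in 10 million.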

Those are good points when applied to the differences between particle physics and epidemiology. But epidemiology is just a small part of biomedical science. What about the delicate and exquisitely controlled experiments that occur in laboratories? Are hypotheses involving intracellular enzyme pathways and the effects of microRNA on protein regulation so much less constrained than those in, say, solid-state physics and materials science? In the current issue of Science, Robert Service describes a feud over whether a researcher in Switzerland has or has not created exotic entities: gold nanoparticles striped with bands of organic molecules. In any kind of laboratory science there is a danger, as I wrote in the Times, of "unknowingly smuggling one’s expectations into the results, like a message coaxed from a Ouija board."

These are the kinds of matters I will be exploring in “Raw Data” and in a final post or two for this blog. Comments and corrections are welcome by email. For public discussion please use Twitter. For a glimpse of my new book, The Cancer Chronicles, please see this website. @byGeorgeJohnson
