A unique source for peeking behind the curtains of science is Ivan Oransky and Adam Marcus's Retraction Watch. Each day they comb through journals and alert readers to some of the latest research papers that, for one reason or another, a journal has decided to correct or withdraw from publication. In yesterday's posts we learned, for example, that "Misuse of data forces retraction of paper on sow's milk" and that "Plagiarism leads to retraction of math paper." The reasons given by the journals can be comically opaque, and Retraction Watch tries, with a sometimes sardonic touch, to get to the truth of the matter. The best journals take great care in the laborious process of receiving papers, sending them out for review, and eventually publishing those that make the cut. "And yet mistakes happen," Marcus and Oransky noted in their introductory post.
Sometimes these slips are merely technical, requiring nothing more than an erratum notice calling attention to a backwards figure or an incorrect address for reprints. Less often but far more important are the times when the blunders require that an entire article be pulled. . . . Retractions are born of many mothers. Fraud is the most titillating reason, and mercifully the most rare, but when it happens the results can be devastating.
A falsehood that has already been cited by a hundred other researchers and incorporated into their thinking can never be cleanly excised. One of the most interesting cases Retraction Watch has been following is that of Bharat Aggarwal, a cancer researcher at M.D. Anderson who has been accused of manipulating images in papers about natural substances that can purportedly fight cancer. The story has also been covered by Todd Ackerman in the Houston Chronicle:
The whistle-blowers allege Aggarwal manipulated his images -- adding or subtracting features, cropping, stretching, rotating, flipping horizontally or vertically -- to leave the impression the same ones represented different experimental conditions.
Ackerman has also been covering the controversies at Anderson over the new president, whom I wrote about in my previous dispatch. Earlier this week a lawyer for Dr. Aggarwal sent a threatening letter to Retraction Watch demanding that the editors retract their posts about his case, and Marcus and Oransky have reported on that with the same dispassionate objectivity they bring to their other work. Last fall in the New York Times, Carl Zimmer wrote about a study concluding that the primary reason for a growing number of retractions is not simply error but misconduct, which includes scientific fraud. Bad as that sounds, this is hardly ever like fraud in the financial world. An exception might occur when an experiment involves, for example, a potentially lucrative medicine. Earlier this year an investigation by the University of Connecticut found that a researcher had faked data from experiments with resveratrol, a substance found in red wine that has been touted as a possible means of extending life. It was reported at the time that he was involved with a company that sold resveratrol supplements. We don't yet know where on the spectrum the Aggarwal case will lie. But most often, as science writer Maggie Koerth-Baker has pointed out, the reasons for fudging data are subtle:
It's about having spent years on a project and really, really, really not wanting to believe that time was wasted. . . . It's about convincing yourself that you can cheat a little, just this one time, because your particular circumstances are just.
You know in your heart that your hypothesis is right. You rush to stake your claim, even if the data doesn't quite cooperate.
Drawing by Alison Kent from The Ten Most Beautiful Experiments. In The Ten Most Beautiful Experiments, I wrote about allegations that Robert Millikan cherry-picked his data when he became the first to measure the charge of the electron. Before Millikan it was unclear whether electricity came in a continuous flow like water or was parceled out in precise units, like pocket change. It is a difficult experiment, and I describe how I tried it at home with an old Millikan apparatus from eBay and a 10,000-volt power supply. The idea is to use a perfume atomizer to spray a cloud of oil droplets into a gap between two brass plates and then apply a high voltage, manipulating the dial until a droplet is suspended in midair. Then you let go, timing its fall with a stopwatch. After you have followed a dozen or so drops, you plug the numbers into some equations and calculate the fundamental unit of charge. It's scary work and I failed miserably. If science had depended on me, electrons would come in all shapes and sizes. Or there would be no electrons.
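For the curious, the arithmetic at the end is straightforward once you have the stopwatch timings. Here is a sketch of the standard oil-drop calculation via Stokes' law; the plate voltage, plate spacing, and droplet speeds below are made-up illustrative values, not Millikan's (or my) actual numbers:

```python
import math

# Assumed experimental parameters (illustrative, not from the post)
g = 9.81            # gravitational acceleration, m/s^2
rho_oil = 886.0     # oil density, kg/m^3
rho_air = 1.2       # air density, kg/m^3
eta = 1.8e-5        # viscosity of air, Pa*s
V = 500.0           # voltage across the plates (assumed)
d = 5.0e-3          # plate separation, m (assumed)

def droplet_charge(v_fall, v_rise):
    """Charge on one droplet from its two measured terminal speeds.

    v_fall: speed (m/s) falling freely with the field off.
    v_rise: speed (m/s) rising with the field on.
    """
    # Field off, the droplet falls at terminal velocity, so gravity
    # balances Stokes drag: m*g = 6*pi*eta*r*v_fall, where the mass
    # is m = (4/3)*pi*r^3*(rho_oil - rho_air). Solve for the radius:
    r = math.sqrt(9 * eta * v_fall / (2 * g * (rho_oil - rho_air)))
    # Field on, the electric force balances gravity plus drag:
    # q*E = m*g + 6*pi*eta*r*v_rise, with E = V/d between the plates.
    E = V / d
    return 6 * math.pi * eta * r * (v_fall + v_rise) / E

# One hypothetical droplet: falls 1 mm in 20 s, rises 1 mm in 10 s
q = droplet_charge(1e-3 / 20, 1e-3 / 10)
n = round(q / 1.602e-19)  # how many elementary charges it carries
```

Repeating this for many droplets, the charges cluster at integer multiples of one number, and that common divisor is the charge of the electron. The skill is entirely in the timings; the computation is the easy part.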
These things sound so easy in the physics books. You don't hear about the brass plates shorting out and sparking because a metal clip slipped into the wrong position. . . . I'd confuse one drop with another or with a floater in my eye. . . . Sometimes a drop would be so heavy that it sank like a stone, or carry so much charge that when I turned on the voltage it rocketed out of sight. I tried and failed too many times before I realized: for me to master so delicate an experiment would be like learning to play the violin or at least make good cabinetry.
Maestro Millikan, I called him. Here are some observations from his notebooks:
Very low something wrong . . . not sure of distance . . . Possibly a double drop . . . Beauty Publish . . . Good one for very small one . . . Exactly Right . . . Something the matter . . . Will not work out . . . Publish this Beautiful one. . . . Perfect Publish . . . Best one yet. . . .
Years after Millikan’s death, entries like these led to suspicions that he had cooked the books. I found myself siding with his defenders.
This is not an accusation that rings true to someone who has struggled with the oil-drop experiment. Millikan, I suspect, had simply developed a feeling for the mechanism, a sixth sense for when something had gone wrong: a slip of the thumb on the stop watch, a sudden fluctuation in temperature or plate voltage, a dust particle masquerading as an oil drop. He knew when he had a bad run. More interesting than the unfounded allegations is the question of how you keep from confusing your instincts with your suppositions, unconsciously nudging the apparatus, like a Ouija board, to come up with the hoped-for reply. It's something every experimenter must struggle with. The most temperamental piece of laboratory equipment will always be the human brain.
The charge, Millikan concluded, was 1.5924 x 10^-19 coulombs. (One coulomb is about what flows each second through a 100-watt bulb.) A century later the accepted value is 1.6022 x 10^-19.

Note: The second part of this post was adapted from one I wrote last year in my blog The Cancer Chronicles.