
The Problem With Medicine: We Don't Know If Most of It Works

Less than half the surgeries, drugs, and tests that doctors recommend have been proved effective.

By Jeanne Lenzer and Shannon Hall
Feb 11, 2011 (updated Jun 27, 2023)



Lana Keeton is accustomed to taking her licks standing up. She has worked as a steel broker and boxing promoter who rubbed elbows with Don King in the rough-and-tumble fighting world. She is also a kickboxer who doesn’t like to lose. But in 2001 a routine surgical procedure knocked her off her feet and led to the loss of her health, her business, and her dream home, a three-bedroom condominium in Miami Beach.

Keeton was in her thirties when she began suffering from intermittent bleeding and pain caused by fibroids, benign tumors in her uterus. She had the tumors surgically removed in 1983 and a second time in 1991, after they recurred. When her symptoms flared yet again in early 2001, her surgeon recommended a hysterectomy to get rid of the problem once and for all. During a discussion with her doctor about the upcoming surgery, Keeton mentioned that she occasionally leaked a little bit of urine when she coughed or sneezed. Nothing serious; she was still kickboxing without any problem. The surgeon told her that as long as he was “in there” doing a hysterectomy, he could fix her urinary problem by using some synthetic mesh as a sling to support her bladder. “He told me it was new and that I’d like it,” Keeton says. “I didn’t question it. I trusted him.”

The mesh was indeed new, but it was also relatively untested, as Keeton would eventually learn. Just 48 hours after her discharge following surgery, she was rushed to a nearby hospital, where doctors diagnosed a life-threatening infection called necrotizing fasciitis and told her she required emergency surgery to remove dead tissue. After a rocky 16 days in the hospital, Keeton was sent home, where she was bedridden for the next three months. A nurse came twice a day to dress the gaping wound in her belly, which had to be left open to control the infection as she healed. Unable to work, Keeton couldn’t keep up with her condo payments.

During the 16 subsequent surgeries and procedures Keeton would undergo, doctors discovered that the mesh had sliced its way through her bladder like a grater through cheese. Infections were forming on the mesh itself. Doctors worked to extract the mesh bit by bit, but it was so embedded in her internal tissues that they are still trying to remove every last piece today.

To understand why this was happening to her, Keeton went online. What she encountered left her dumbfounded: hundreds of patients talking about their problems with surgical mesh implants. Many told stories like hers, of recurrent pain, infections, and bleeding. Men whose hernias had been repaired with mesh were left incontinent and forced to wear adult diapers. Keeton was enraged. Here she was reading about serious, even life-threatening complications, yet her doctor either hadn’t known or hadn’t told her of any of the risks—risks she says “I would never have taken for such a minor inconvenience as urinary incontinence.”


In a recent poll conducted by the Campaign for Effective Patient Care, a nonprofit advocacy group based in California, 65 percent of the 800 California voters surveyed said they thought that most or nearly all of the health care they receive is based on scientific evidence. The reality would probably shock them. A panel of experts convened in 2007 by the prestigious Institute of Medicine estimated that “well below half” of the procedures doctors perform and the decisions they make about surgeries, drugs, and tests have been adequately investigated and shown to be effective. The rest are based on a combination of guesswork, theory, and tradition, with a strong dose of marketing by drug and device companies.

Doctors are often as much in the dark as their patients when they implant new devices (like the surgical mesh used on Keeton), perform surgery, or write prescriptions. The U.S. Food and Drug Administration (FDA) regulates drugs, devices, and many tests, but it does not control how doctors use them and has no control at all over surgeries. Lack of strong oversight means doctors often have limited information about side effects, even from products and procedures used for years. One surgeon who complained says, “Device makers could sell us a piece of curtain and call it surgical mesh and we wouldn’t know the difference.”

Of course, some treatments don’t have to be studied. Penicillin, for example, is an accepted drug for pneumonia. But a surprising number of treatments are later found to be useless or harmful when they are finally put to the test. Many widely adopted surgeries, devices, tests, and drugs also rest on surprisingly thin data. For instance, many doctors routinely prescribe a powerful blood thinner called warfarin to prevent a pulmonary embolism, a potentially deadly blood clot that blocks an artery in the lungs. Warfarin has been in use for decades. Yet when the Cochrane Collaboration, a highly regarded international consortium of medical experts, examined the evidence, they could find only two small (albeit randomized and controlled) studies supporting the use of warfarin for patients at risk of developing clots. Neither study proved that the risky blood thinner was superior to simply giving the patient ibuprofen.

Another practice, widespread for more than 40 years, is spinal fusion, a surgery for back pain that often involves implanting expensive devices known as pedicle screws. It can take weeks to recover from the surgery, and costs can run into the tens of thousands of dollars. Yet it is anybody’s guess whether any given patient will have less pain after surgery because nobody has conducted crucial studies to determine who needs spinal fusion and who would do better with less invasive treatment. Even the imaging tests that doctors use to make the case for back surgery, including MRI, X-rays, and CT scans, are not very good at pinpointing the cause of pain, comments Jerome Groopman, chief of experimental medicine at Beth Israel Deaconess Medical Center in Boston and author of How Doctors Think.

The holes in medical knowledge can have life-threatening implications, according to an Agency for Healthcare Research and Quality report published in 2001: More than 770,000 Americans are injured or die each year from drug complications, including unexpected side effects, some of which might have been avoided if somebody had conducted the proper research. Meaningless or inaccurate tests can lead to medical interventions that are unnecessary or harmful. And risky surgical techniques can be performed for years before studies are launched to test whether the surgery is actually effective.

“All too often a new procedure is developed, it is used widely, and then if doubts appear we might or might not do the research that’s needed,” says Carol Ashton, a physician at the Methodist Hospital Research Institute in Houston who studies surgical evidence. Complicating matters, many clinical guidelines are written by physicians and members of professional societies who have financial conflicts of interest with drug, device, or test kit companies.

A 2002 study in the Journal of the American Medical Association (JAMA) found that 87 percent of guideline authors received industry funding and 59 percent were paid by the manufacturer of a drug affected by the guidelines they wrote. Evidence of resulting conflicts continues to mount. A report published this year found that authors of medical journal articles favorable to the controversial diabetes drug Avandia (thought to increase the risk of heart attack) were three to six times as likely to have financial ties to the manufacturer as were the authors of articles that were neutral or unfavorable. Physician David Newman, director of clinical research at Mount Sinai Medical Center’s department of emergency medicine in New York City, says, “We’re flying blind too much of the time, and it’s hurting patients.”

Many policy experts believe we could substantially improve the quality of health care and reduce costs if only we would do more research to determine what works best in medicine, and for which patients. Giving patients care they don’t need and failing to give them care that is necessary account for an estimated 30 percent or more of the $2.4 trillion the nation spends annually on health care. “We don’t like to acknowledge the uncertainty of medicine, either to ourselves or to our patients,” says Michael Wilkes, a professor of medicine and vice dean of education at the University of California, Davis. “But patients deserve to know when their doctor’s recommendation is backed up with good evidence and when it isn’t.”


Nowhere in medicine is this more of a problem than in surgery. Even essential surgery may pose risk of infection, medical error, or a bad reaction to anesthesia. But risks are compounded because many common surgical techniques are not as effective as physicians believe or are simply performed on the wrong patients, says Guy Clifton, a neurosurgeon at the University of Texas Medical School at Houston and author of Flatlined: Resuscitating American Medicine.

Take the practice of cleaning out the carotid arteries, the large blood vessels that run up each side of the neck. Just like the coronary arteries, where heart attacks occur, the carotid arteries can become clogged with fatty tissue. If a clump of this tissue, called plaque, breaks free, it can travel into the brain and block a smaller blood vessel, causing a stroke. Several large clinical trials, involving thousands of asymptomatic patients in the United States and Europe, have shown that a surgical technique known as carotid endarterectomy can remove the plaques and slightly reduce the risk of a stroke, by about 1 to 5 percent over five years. But about 3 percent of the time, the surgery itself can trigger a stroke, heart attack, or even death, so it offers meaningful benefit only to people who are at the highest risk of having a stroke. That would be symptomatic patients, those with a serious blockage of a carotid artery and a history of at least one previous stroke. Nevertheless, neurologist Peter Rothwell, a researcher at Oxford University and specialist on stroke, has found that 80 percent of such procedures were performed on low-risk patients without symptoms—an inappropriate group.
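The trade-off the article describes can be sketched as simple arithmetic: compare the absolute stroke-risk reduction the surgery offers over the follow-up period with the up-front risk the operation itself carries. The sketch below uses the figures quoted in the article (a 1 to 5 percentage-point reduction over five years versus roughly 3 percent procedural risk) as a rough back-of-envelope illustration; it deliberately ignores timing and severity differences between outcomes.

```python
def net_absolute_benefit(stroke_risk_reduction, procedural_risk):
    """Crude net benefit of a preventive surgery: the absolute
    stroke-risk reduction over the follow-up period, minus the risk
    that the procedure itself causes a stroke, heart attack, or death."""
    return stroke_risk_reduction - procedural_risk

# Figures from the article, used illustratively:
low_risk = net_absolute_benefit(0.01, 0.03)   # negative: net harm
high_risk = net_absolute_benefit(0.05, 0.03)  # positive: net benefit
print(low_risk < 0, high_risk > 0)
```

On these numbers, only patients at the high end of the benefit range come out ahead, which is why performing 80 percent of the procedures on low-risk, asymptomatic patients is so troubling.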

Then there is the other half of the story. In 1989, as part of an effort to improve carotid surgery, vascular surgeons began employing a technique called stenting to prop open clogged carotid arteries with metal mesh tubes. Stenting is less invasive, but that does not necessarily mean it is safer. One study, conducted in France and published in 2006 in the New England Journal of Medicine, had to be stopped because stenting was killing patients. Another large study, out this year, found that 4.7 percent of endarterectomy patients had a stroke or died within four years after surgery, compared with 6.4 percent of those receiving stents. Rothwell is not optimistic that even this evidence will dampen surgeons’ enthusiasm for stents. “One issue is how these fashions arise in medicine—why do doctors accept a new technique and begin using it widely?” he says. “Innovation in medicine is not synonymous with progress.” Yet no country has set up a systematic program for evaluating new surgeries.


If surgery is the wild west of medicine, shouldn’t oversight by the FDA ensure that drugs, at least, are safe and effective? Not necessarily. All drugs must undergo a slew of tests before they are approved, but many studies the FDA oversees are poorly designed or too small to answer important questions, such as how often rare but potentially harmful or lethal side effects occur, and which patients are unlikely to be helped. And many drugs are not adequately monitored for safety problems after they reach the market.

“It’s impossible to guarantee that unexpected problems won’t crop up over time,” notes Jerome R. Hoffman, a professor of emergency medicine at the University of Southern California. “But the FDA makes matters worse by failing to adopt the precautionary principle ‘Let’s be fairly sure it’s safe before we use it’ in favor of ‘We’ll approve it unless you can prove that it’s dangerous.’”

The FDA currently relies on the drugmakers themselves, along with scattered reports from individual doctors, to identify problems once a drug is on the market. “The situation is hardly better with regard to effectiveness,” Hoffman continues. “The FDA requires only that a drug is by some measure better than nothing. Most new drugs are ‘me-too’s,’ and they don’t have to prove that they are an advance over older and cheaper drugs, including some long proven to be safe. These don’t have to be the terms under which the FDA operates. But as long as FDA’s primary mandate seems to be that it’s industry friendly, it is hard to see any of this changing.”

Although the FDA collects safety data on drugs, experts estimate that only a fraction of the potentially related harms and deaths—about 10 to 50 percent—end up in FDA databases, in part because reporting is voluntary. Also, what is reported is often so incomplete that there is no way to tell whether a drug or device is at fault. According to William Maisel, formerly of Harvard Medical School and now chief scientist at the FDA’s Center for Devices and Radiological Health, many trivial and unrelated events are thrown in along with serious incidents, “making it hard to find the signal amid all the noise.” To top things off, the FDA does not routinely analyze the reports for each drug or device, so serious side effects can be missed for years.

This is the issue that drew so much attention to Avandia, the diabetes drug made by pharma giant GlaxoSmithKline. In 2007 Steven Nissen, a prominent cardiologist from the Cleveland Clinic, and another researcher published an analysis of 42 studies, concluding that Avandia increases the risk of heart attack and death. This past February, the U.S. Senate Committee on Finance (which has jurisdiction over Medicare and Medicaid) released documents and other evidence suggesting that GlaxoSmithKline knew about possible cardiac side effects for several years before Nissen’s report. Rather than warn patients and government officials, company executives “attempted to intimidate independent physicians [and] focused on strategies to minimize or misrepresent findings that Avandia may increase cardiovascular risk,” according to the committee.

Mary E. Money, an internist in Hagerstown, Maryland, says she became alarmed in 1999 after several of her patients on Avandia developed symptoms of congestive heart failure. She and a colleague looked at the records for all of their patients on the drug and found an unexpectedly high percentage was experiencing symptoms of heart failure. In January 2000 Money contacted the manufacturer to alert it to the problem. The company eventually sent a letter to the chief of staff at the hospital where Money worked, telling him that she should not be permitted to talk about the problem since, it said, the issue of congestive heart failure was not proved to be an effect of the drug. Money says she felt “highly intimidated” by the letter and what she perceived as the implicit threat of a lawsuit. She had planned to publish her findings, but after the hospital received the letter, one of her intended coauthors, an epidemiologist, stopped responding to her e-mails, effectively killing publication.

A spokesperson for GlaxoSmithKline called Money’s theories “unsubstantiated.” Nonetheless, this July, the FDA suspended enrollment in Glaxo’s large clinical trial comparing the safety of Avandia with that of a competing diabetes drug and may halt the study altogether. Nissen argues that the drug should be taken off the market.

It is all too easy for physicians to ignore or miss evidence, particularly when drug or device companies use aggressive marketing to counter reports that could harm sales. In 2002 JAMA published results of a huge study, called the Antihypertensive and Lipid-Lowering Treatment to Prevent Heart Attack Trial, or ALLHAT, which looked at drugs used to lower blood pressure. The researchers concluded that inexpensive generic diuretic drugs were just as effective at controlling blood pressure and preventing heart attacks as were brand-name drugs. For some patients the diuretics were actually safer, with fewer side effects.

The study, which was funded by NIH’s National Heart, Lung, and Blood Institute, made headlines around the world. Given the strength of the results, its authors and the NIH believed it would encourage physicians to try diuretics first. Yet after eight years, the ALLHAT report has hardly made a dent in prescribing rates for name-brand blood pressure medications, according to Curt Furberg, a professor of public health sciences at Wake Forest University.

USC’s Hoffman says the scenario repeats itself time and again. “Some expensive new drug becomes a blockbuster best seller following extensive marketing, even though the best one might be able to say about it is that it seems statistically ‘non-inferior’ to an older, cheaper drug. At the same time, we don’t have any idea about its long-term side effects.”

After the publication of some ALLHAT results, Pfizer—one of the manufacturers of newer and more expensive antihypertensive drugs—commissioned a research company to survey doctors about their awareness of the results. When the company learned that doctors were generally in the dark about the study, Pfizer helped make sure they stayed that way. Two Pfizer employees were praised as “quite brilliant” for “sending their key physicians to sightsee” during Furberg’s ALLHAT presentation at the annual American College of Cardiology conference in California in 2000, according to e-mails entered into the public record after a citizen’s petition to the FDA. Pfizer sales reps were instructed to provide a copy of the study to doctors only if specifically asked. “The data from a publicly funded study may be good, but you don’t have anyone out there pushing that study data, versus thousands of people doing it for the drug companies,” says Kevin Brode, a former vice president at marketRx, a firm that provides strategic marketing information to the pharmaceutical industry.


Misleading marketing isn’t the only issue. In many cases, physicians perform surgeries, prescribe drugs, and give patients tests that are not backed by sound evidence because most doctors are not trained to analyze scientific data, says U.C. Davis’s Michael Wilkes. Medical students are required to memorize such a huge number of facts—from the anatomy and physiology of every structure in the human body to the fine details of thousands of tests, diagnoses, and treatments—that they generally do not have time to critique the information they must cram into their heads. “Most medical students don’t learn how to think critically,” Wilkes says.

That was not true for Mount Sinai’s David Newman. “I grew up questioning authority—and it got me kicked out of kindergarten,” he says with a laugh. In medical school, he was surprised that his questions were often met with answers that were rooted not in evidence but merely in the opinions and habits of senior physicians. Over the years as a practicing physician, he says he has come to believe that most of what physicians do daily “has no evidence base.”

This was the gist of a talk Newman delivered on a cool, gray day last fall to a packed lecture hall in the cavernous Boston Convention and Exhibition Center, where more than 5,000 emergency physicians from around the world gathered for the Scientific Assembly of the American College of Emergency Physicians. Much of what doctors know and do in medicine is flat-out wrong, Newman told his colleagues, and the numbers tell the truth.

Newman started his talk by explaining two concepts: the “number needed to treat,” or NNT, and the “number needed to harm,” or NNH. Both concepts are simple, but often doctors are taught only a third number: the relative decrease in symptoms that a given treatment can achieve. For example, when an ad for the anticholesterol drug Lipitor trumpets a one-third reduction in the risk of heart attack or stroke, that is a relative risk, devoid of meaning without context. Only by knowing how many patients have to be treated to achieve a given benefit—and how many will be harmed—can doctors determine whether they are doing their patients any good, Newman says. In the best-case scenario, 50 men at risk for a heart attack would have to be treated with statins like Lipitor for five years to prevent a single heart attack or stroke. Stated differently, 98 of 100 men treated for five years would receive no benefit from the drug, yet they would all be exposed to risk of its potentially serious and fatal side effects, such as muscle breakdown and kidney failure.
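The gap between a relative risk reduction and the NNT comes down to one division: NNT is the reciprocal of the absolute risk reduction. A minimal sketch, using an assumed illustrative baseline risk of 6 percent over five years (a figure chosen so the arithmetic matches the article's best-case NNT of 50, not one stated in the text):

```python
def nnt(control_event_rate, treated_event_rate):
    """Number needed to treat = 1 / absolute risk reduction."""
    absolute_risk_reduction = control_event_rate - treated_event_rate
    return 1 / absolute_risk_reduction

# A "one-third relative reduction" sounds dramatic, but applied to a
# 6% baseline risk it shaves off only 2 percentage points in absolute
# terms, so 50 patients must be treated to prevent one event.
control = 0.06
treated = control * (2 / 3)  # one-third relative risk reduction
print(round(nnt(control, treated)))
```

The same one-third relative reduction applied to a lower baseline risk yields a far larger NNT, which is why a relative figure on its own is, as Newman puts it, devoid of meaning without context.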

Another example cited by Newman: Doctors routinely give antibiotics to people with possible strep throat infections in order to prevent heart damage that can, in rare instances, develop if a strep infection leads to acute rheumatic fever. In practice, doctors prescribe an antibiotic to more than 70 percent of all adults with a sore throat, says the Centers for Disease Control and Prevention (CDC), even though almost all throat infections are caused by viruses, for which antibiotics are useless.

Are doctors keeping their patients safe by freely prescribing antibiotics, Newman asks, or are they doing more harm than good? To answer the question, he dug up statistics from the CDC and found that the NNT was 40,000: Doctors would have to treat 40,000 patients with strep throat to prevent a single instance of acute rheumatic fever. Then he looked up how many fatal and near-fatal allergic reactions are caused by antibiotics. The number needed to harm was only 5,000. In other words, in order to prevent a single case of rheumatic fever, eight patients would suffer a near-fatal or fatal allergic reaction.
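Newman's comparison follows directly from the two numbers: treating enough patients to prevent one case of rheumatic fever (the NNT, 40,000) means that group also contains NNT / NNH patients who suffer the treatment's harm. A short sketch of that division, using the article's figures:

```python
def harms_per_benefit(nnt, nnh):
    """Patients harmed for each patient helped: treating `nnt` patients
    prevents one bad outcome, and that same group of `nnt` patients
    includes nnt / nnh serious treatment-caused harms."""
    return nnt / nnh

# Article figures: NNT = 40,000 antibiotic prescriptions per case of
# acute rheumatic fever prevented; NNH = 5,000 per near-fatal or fatal
# allergic reaction.
print(harms_per_benefit(40_000, 5_000))
```

Any treatment whose NNT exceeds its NNH harms more patients than it helps, at least on the outcomes being counted.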

Finding the hard statistics for antibiotics is relatively easy, but sometimes data are literally withheld. Lisa Bero at the University of California, San Francisco, found that clinical trials producing positive outcomes were nearly five times as likely to be published as those with neutral or negative outcomes, allowing health care providers to come away with rosier views of a drug’s value than might be warranted. As Bero and her coauthors so drily put it, “The information that is readily available in the scientific literature to health care professionals is incomplete and potentially biased.”

Adding to the confusion caused by data suppression are instances of spin that cast marginally effective medicines as “miracle drugs” and transform risky devices into technological breakthroughs. Tom Jefferson knows about spin firsthand. Jefferson, a physician based in Rome and prominent member of the Cochrane Collaboration, was charged with reviewing studies of the antiflu drug oseltamivir, sold as Tamiflu, during the height of the avian flu scare in 2005. He and his team concluded that the drug was effective against complications of flu, like pneumonia, thus encouraging its use. But several years later, another physician challenged that conclusion because 8 of 10 studies in a meta-analysis—a review of studies—that Jefferson relied on had never been published.

Although Jefferson had trusted the unpublished study conclusions at the time, the challenge sent him on a hunt for the raw data in 2009. He was stymied when several study authors and the manufacturer gave one excuse after another for why they couldn’t supply the actual data. Jefferson’s concern turned to outrage when two employees of a communications company came forward with documents showing that they had been paid to ghostwrite some of the Tamiflu studies. They had been given explicit instructions to ensure that a key message was embedded in the articles: Flu is a threat, and Tamiflu is the answer.

After reanalyzing the raw data finally made available (not all of it has been released), Jefferson and his colleagues published their review last December, saying that once the unpublished studies were excluded, there was no proof that Tamiflu reduced serious flu complications like pneumonia or death. Health officials around the world had assumed the drug was as effective as claimed and recommended Tamiflu for patients during the recent H1N1, or swine flu, pandemic. That pandemic turned out to be far milder than expected, and it is anybody’s guess whether better information about Tamiflu—or better drugs—will appear before a more serious flu outbreak hits. “We shouldn’t have taken anybody’s word for it. We took it on good faith. Never again,” Jefferson says today.


An essential part of the solution is better medical evidence based on independent research, and lots of it. Yet the NIH allocates less than 1 percent of its $30 billion annual budget to “comparative effectiveness research,” the kind needed to sort out the surgeries, drugs, and devices that work from those that do not. The rest goes toward more basic science aimed at finding new cures. The Agency for Healthcare Research and Quality, established in the late 1980s to fund such research, has a budget of less than $400 million a year. The quest for change is stymied by pharmaceutical company lobbying aimed at convincing the public that shutting some doors would amount to health care rationing, not better treatment or medical advance.

Very slowly, however, some things have started to change. The economic stimulus package of 2009 contained more than $1 billion in funding over two years for comparative effectiveness studies, and health care reform legislation signed by President Obama in March establishes the nonprofit Patient-Centered Outcomes Research Institute to set priorities and distribute the funds.

Beyond more and better medical evidence, a growing number of physicians want to enfranchise patients with a renewed emphasis on informed consent, ensuring that patients know what they are getting into when they agree to an elective test or surgery. “Patients let their doctors make a lot of important decisions for them. There’s a large body of research that says many would make different decisions, especially in surgery, if they understood the trade-offs and the lack of evidence involved,” says Jack Fowler, past president of the Boston-based Foundation for Informed Medical Decision Making.

Shared decision making might have saved Lana Keeton years of pain and disability. In October 2008, seven years after she had the surgical mesh implanted, the FDA issued a warning that the products of nine mesh manufacturers (including the one implanted in Keeton) were associated with serious complications, from bowel and bladder perforations to infections and pain. That year, Keeton founded Truth in Medicine, an organization devoted to ensuring that surgeons obtain genuine informed consent from patients before they implant devices. “If you go to a grocery store, they list all the ingredients in the products,” she says. “But surgeons don’t tell you what they’re putting in your body or what the complications are.”


Copyright © 2024 Kalmbach Media Co.