Two or three times a week, while a life hangs in the balance, James Brophy makes a quick decision. Brophy is a cardiologist at Centre Hospitalier de Verdun, in suburban Montreal, which treats about 300 heart attack victims a year. As they arrive, Brophy orders roughly half of them--the ones who made it to the hospital quickly enough--to be injected with one of two clot-busting drugs, streptokinase or tissue plasminogen activator (t-PA). All cardiologists agree that both drugs work well: more than 90 percent of patients who receive either medication survive. Where they disagree is on which drug to use. To be sure, thick reports convey the results of clinical trials designed to test the relative merits of the two drugs. But the meaning of those data is anything but clear.
Like every other cardiologist--and, most certainly, like every patient--Brophy would like to know which drug is superior. And to that end, he’s waded through a pile of tricky statistics, skirted deep philosophical questions involving how we can know anything at all, and teamed up with Lawrence Joseph, a biostatistician at McGill University. Last year they published a controversial paper advising other doctors how to cut through the statistical fog. To make a rational choice, Brophy and Joseph declared, physicians of the late twentieth century should learn the mental techniques of an obscure eighteenth-century Englishman: the Reverend Thomas Bayes.
Despite his clerical title, the Reverend Thomas Bayes’ most enduring work is mathematical, not spiritual. The procedure for evaluating evidence that now bears his name--Bayes’ theorem--was published in 1763, two years after his death. Early in this century, with the rise of modern statistics--a different set of procedures for evaluating evidence--Bayes’ theorem fell out of favor. Recently, however, some researchers have returned to Bayesian ideas.
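In modern notation (not Bayes’ own), the theorem is a one-line rule for updating belief in light of evidence: the probability of a hypothesis H after seeing evidence E is P(H | E) = P(E | H) × P(H) / P(E), where P(H), the prior, measures how plausible H seemed before the evidence arrived, and P(E | H), the likelihood, measures how well H predicts the evidence.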
Mathematicians, by and large, don’t find Bayesian procedures very exciting. The people who use them tend to be analysts working on practical problems that require someone to make a risky decision based on imperfect information: evaluating the health risks of radioactive pollutants, for example, even though precise exposure records may be lacking and the effects of low doses are not well understood; or estimating the reliability of backup diesel generators at nuclear power plants, though there have been very few real-life emergencies. One of the Big Three auto companies even paid a statistician good money to design Bayesian software that forecasts warranty claims for new-model cars, although no data yet exist on the long-term performance of those cars.
Bayesian procedures, in theory, are tailor-made for these kinds of messy problems, which often involve complex science, uncertain evidence, and quarreling experts--the sort of mess a cardiologist might face when choosing between streptokinase and t-PA. “I’ve used these drugs,” says Brophy, “and participated in clinical trials.” But his limited experience didn’t count for much, and two large trials, run in 1990 and 1993, one involving some 20,000 patients and the other almost 30,000, proved equivocal. Streptokinase did slightly better in one, t-PA in the other. “Essentially,” says Brophy, “they found no big difference between the two drugs.”
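To see what such an analysis looks like in practice, here is a minimal sketch, in Python, of the kind of calculation a Bayesian comparison implies. The survivor counts below are hypothetical round numbers in the spirit of the trials described above, not the published results; the code gives each drug’s survival rate a Beta posterior and asks how often one drug beats the other.

import numpy as np

rng = np.random.default_rng(0)

# HYPOTHETICAL counts (survivors, patients), for illustration only --
# not the published trial results.
streptokinase = (18_200, 20_000)
tpa = (27_350, 30_000)

def survival_posterior(survivors, patients, n_draws=100_000):
    # A uniform Beta(1, 1) prior plus a binomial likelihood yields a
    # Beta posterior over the drug's true survival rate.
    return rng.beta(1 + survivors, 1 + patients - survivors, size=n_draws)

p_sk = survival_posterior(*streptokinase)
p_tpa = survival_posterior(*tpa)

# Posterior probability that t-PA's true survival rate is the higher one.
print("P(t-PA better than streptokinase):", (p_tpa > p_sk).mean())

Swap in a different prior--one encoding the results of earlier trials, say--and the answer changes; making that choice explicitly, rather than pretending not to make it, is the heart of the Bayesian approach.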