Gregor Mendel is one of the most important figures in the history of science. Taught in classrooms across the globe, his discoveries laid the foundation of genetics. Through rigorous breeding experiments with pea plants, Mendel demonstrated how dominant and recessive alleles shape the traits we inherit.
Dominance and recessiveness describe how inherited alleles, the different copies of a single gene, relate to each other. A dominant allele needs only one copy in your genetic makeup to be expressed in a trait such as eye color, whereas a recessive allele must be inherited in two copies to appear. These relationships are best demonstrated, and most commonly recognized, through the Punnett square diagram.
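For readers who like to see the mechanics, the logic of a Punnett square can be sketched in a few lines of code. This is a minimal illustration, not a genetics tool: the allele symbols ("A" for dominant, "a" for recessive) and the function names are our own shorthand, not Mendel's notation.

```python
from itertools import product

def punnett_square(parent1, parent2):
    """Cross two parents: every allele from one can pair with every
    allele from the other, giving the four offspring genotypes."""
    return ["".join(sorted(pair)) for pair in product(parent1, parent2)]

def phenotype(genotype):
    """A genotype shows the dominant trait if it carries at least one
    dominant (uppercase) allele; otherwise the recessive trait appears."""
    return "dominant" if any(a.isupper() for a in genotype) else "recessive"

# Crossing two heterozygous parents (Aa x Aa):
offspring = punnett_square("Aa", "Aa")
print(offspring)                           # ['AA', 'Aa', 'Aa', 'aa']
print([phenotype(g) for g in offspring])   # three dominant, one recessive
```

Running the cross reproduces the famous 3:1 ratio: three of the four genotypes carry at least one dominant allele, and only the double-recessive "aa" shows the recessive trait.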
However, there has been an ongoing historical debate as to whether Mendel's original experiments on this matter were fraudulent or biased in some way. His overall conclusions are well accepted in the scientific community, but his initial data itself may not be entirely accurate. New research in the past several years has revived the topic, which has been the subject of multiple books over the past century.
“Mendel’s data are improbably close to what his theory predicted,” says Gregory Radick, a science historian at the University of Leeds. “But the idea that Mendel just made them up, out of thin air, is preposterous.” The more likely explanation is that some unconscious bias played a role in how he judged his results.
The Seeds of Controversy
This notion that Mendel fabricated his data goes back to the year 1900, when biologist W. F. R. Weldon first read Mendel’s seminal paper with some skepticism. Working with the famous mathematician Karl Pearson, Weldon showed that Mendel’s reported ratios matched his theoretical expectations far more closely than chance alone should allow.
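The statistical tool at the heart of this argument is Pearson's chi-square statistic, which measures how far observed counts stray from expected ones. The sketch below uses hypothetical counts, not Mendel's actual figures, and only illustrates the statistic itself; the real analyses aggregated results across many of Mendel's experiments.

```python
def chi_square(observed, expected):
    """Pearson's chi-square statistic: sum of (O - E)^2 / E.
    Small values mean the data sit very close to expectation."""
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

total = 8000
expected = [total * 0.75, total * 0.25]  # the 3:1 Mendelian expectation

# Hypothetical counts suspiciously close to a perfect 3:1 split...
print(chi_square([6010, 1990], expected))  # a tiny statistic
# ...versus counts with an ordinary amount of sampling noise.
print(chi_square([6060, 1940], expected))  # a noticeably larger one
```

The suspicion raised against Mendel's data was essentially this in reverse: across many experiments, the chi-square values were consistently on the "too small" side, a pattern that random sampling would rarely produce.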
It wasn’t until geneticist Ronald Fisher came onto the scene in 1936 that allegations of fraud in Mendel’s work spread more widely. Fisher suggested that Mendel’s results were likely fixed in some way, but he posited that, rather than Mendel himself, some unknown assistant may have fabricated the results to please Mendel. Fisher, however, had no evidence that such an assistant altered anything.
“To Fisher’s credit, he was up front about having no independent evidence for it, and as far as I know, none has emerged since,” says Radick.
The controversy continued for decades afterward, with some scholars claiming to have exonerated Mendel and others claiming to have proven him a scientific fraud. A 2008 book entitled Ending the Mendel-Fisher Controversy even aimed to settle the debate once and for all. It did not end the discussion: many papers published in the years since have offered new insights and perspectives on the matter.
The discussion is generally not about whether the fundamental ideas behind Mendel’s work were right. Fisher himself supported the legitimacy of Mendel’s hypotheses, and Weldon saw his objections as largely academic. Rather, the controversy concerns Mendel’s raw data, and whether he was biased in how he interpreted it.
In the 2016 paper “Are Mendel’s Data Reliable? The Perspective of a Pea Geneticist,” geneticist Norman F. Weeden re-examined Mendel’s data, applying modern findings in pea genetics to Mendel’s work.
“I do not think Mendel ‘fabricated’ his data,” says Weeden. “[However], there is some evidence that unconscious bias, or the enthusiasm of a more than helpful assistant, did influence the segregation values reported by Mendel.”
The study worked both as a historical overview of the literature and as a means of answering some of the more pressing questions raised by scientists over the past few decades. Working through these layers, and considering what we know about Mendel, Weeden found no reason to believe that Mendel deliberately fabricated his results, nor that an assistant altered anything. Rather, Mendel may have unconsciously and unintentionally displayed bias in how he interpreted and presented his data.
This new study also suggested that Mendel may have simplified his data to be presentable for a skeptical audience. He may have done this himself, or with an assistant, to make his model easier for the public and peers to digest. Weeden also notes the different context and parameters for scientific research before the turn of the 20th century.
“I am not sure we can fault him for ignoring certain datasets or experimental complications that probably would have to be included in a paper written today,” Weeden says.
For additional context, the scientific community of Mendel’s day was deeply skeptical of revolutionary new ideas about heredity. Mendel was presenting his data to an audience that didn’t care to listen to his results, and so, in an effort to make his data more palatable, he left out some key details and classified his data arbitrarily to support the model, according to Weeden.
“Mendel not only developed his model of particulate inheritance, but then rigorously tested the model in a way literally beyond the imagination of his peers,” Weeden says. “So, you might sympathize with [Mendel] when he started explaining his ideas to various erudite members of the scientific community and was met with blank, uncomprehending stares.”
In the end, we may never know with certainty the cause behind Mendel’s variable results. The debate has been ongoing for decades, and it is unlikely to end anytime soon. As Weeden discusses, we can only hope that science remains self-correcting, and that evidence continues to pave the way forward.