According to a new paper, fully half of the neuroscience papers that attempt a (very simple) statistical comparison get it wrong: Erroneous analyses of interactions in neuroscience: a problem of significance.
Here's the problem. Suppose you want to know whether a certain 'treatment' has an effect on a certain variable. The treatment could be a drug, an environmental change, a genetic variant, whatever. The target population could be animals, humans, brain cells, or anything else.
So you give the treatment to some targets and a control treatment to others, and you measure the outcome variable. You run a t-test to see whether the difference between the groups is large enough that it's unlikely to have arisen by chance. You find that it's significant.
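To make that procedure concrete, here's a minimal sketch in Python. The group sizes, means, and the use of scipy's independent-samples t-test are my illustrative choices, not anything prescribed by the paper:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Hypothetical data: outcome measurements for control vs. treated targets.
# The treatment is assumed to shift the group mean by about 2 units.
control = rng.normal(loc=10.0, scale=2.0, size=20)
treated = rng.normal(loc=12.0, scale=2.0, size=20)

# Independent-samples t-test: is the group difference
# unlikely under the null hypothesis of no effect?
t_stat, p_value = stats.ttest_ind(treated, control)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")  # p < 0.05 => 'significant'
```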
That's fine. Then you try a different treatment, and it doesn't produce a significant effect relative to the control. Does that mean the first treatment was ...