
Neuroscience Fails Stats 101?

Erroneous analyses of interactions plague neuroscience papers, leading to misinterpreted results. Here's why testing the difference directly matters.


According to a new paper, a full half of neuroscience papers that try to do a (very simple) statistical comparison are getting it wrong: "Erroneous analyses of interactions in neuroscience: a problem of significance".

Here's the problem. Suppose you want to know whether a certain 'treatment' has an effect on a certain variable. The treatment could be a drug, an environmental change, a genetic variant, whatever. The target population could be animals, humans, brain cells, or anything else.

So you give the treatment to some targets and give a control treatment to others. You measure the outcome variable. You use a significance test, such as a t-test, to see whether the effect is large enough that it's unlikely to have happened by chance. You find that it is significant.
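To make that concrete, here's a minimal sketch of the procedure in Python. The group size, effect size, and data are made up for illustration; they're not from the paper.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated outcomes: the treated group gets a hypothetical true
# effect of +0.8 standard deviations relative to control.
control = rng.normal(loc=0.0, scale=1.0, size=20)
treated = rng.normal(loc=0.8, scale=1.0, size=20)

# Independent-samples t-test: is the treated-vs-control difference
# larger than chance alone would plausibly produce?
t, p = stats.ttest_ind(treated, control)
print(f"t = {t:.2f}, p = {p:.3f}")  # p < 0.05 is the usual threshold
```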

That's fine. Then you try a different treatment, and it doesn't cause a significant effect against the control. Does that mean the first treatment was more effective than the second?
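The error the paper documents is answering "yes" here: treating "A was significant, B wasn't" as evidence that A and B differ, without ever testing A against B directly. A significant and a non-significant effect can easily be statistically indistinguishable from each other. Continuing the simulation sketch above (effect sizes again made up), the valid move is a direct test of the difference:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

control = rng.normal(0.0, 1.0, size=20)
treat_a = rng.normal(0.8, 1.0, size=20)  # hypothetical true effect +0.8
treat_b = rng.normal(0.5, 1.0, size=20)  # hypothetical true effect +0.5

# Each treatment against control: with n = 20 per group, A will often
# cross p < 0.05 while B misses it.
_, p_a = stats.ttest_ind(treat_a, control)
_, p_b = stats.ttest_ind(treat_b, control)

# The test that actually answers "does A differ from B?"
_, p_ab = stats.ttest_ind(treat_a, treat_b)

print(f"A vs control: p = {p_a:.3f}")
print(f"B vs control: p = {p_b:.3f}")
print(f"A vs B:       p = {p_ab:.3f}")  # frequently not significant
```

With samples this size, A can clear the threshold while B misses it even though the direct A-vs-B comparison is nowhere near significant. In a factorial design, this direct comparison is exactly what an interaction term tests, which is what the paper's title refers to.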
