Psychiatry and Phrenology

By Neuroskeptic
May 4, 2011

The notorious John P. "Most Published Research Findings Are False" Ioannidis has turned his baleful statistical gaze upon the literature on brain volume abnormalities in psychiatric disorders.

Reports of regional volume differences in the brains of people with mental illness compared to healthy people have appeared in increasing numbers in recent years. Such studies have given plenty of positive results. People with depression have smaller hippocampi. The amygdala is bigger in people with autism. And so on.

Last month, Ioannidis took a comprehensive look at this literature and he argues that it suffers from a fairly serious case of "excess significance bias" - essentially, that scientists are somehow biased towards reporting differences between patients and controls, and are not telling people about the times when there wasn't a difference. This could be because of publication bias, p-value fishing or other scientific sins.

Scientists tend to call a difference between two groups significant if it has a p value of less than 0.05. This means that if there were no real difference, just random noise, a difference at least this large would turn up less than 5% of the time.
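
To make that concrete, here's a minimal simulation in Python (my own sketch; numpy and scipy are not part of the original post): run thousands of fake studies in which there is no real difference at all, and roughly 5% of them will still come out "significant".

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments = 10_000
false_positives = 0

for _ in range(n_experiments):
    # Both "groups" are drawn from the same distribution: no real difference.
    patients = rng.normal(loc=0.0, scale=1.0, size=30)
    controls = rng.normal(loc=0.0, scale=1.0, size=30)
    if stats.ttest_ind(patients, controls).pvalue < 0.05:
        false_positives += 1

# By construction, about 5% of these null studies cross the threshold.
print(false_positives / n_experiments)
```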

However, there are many ways to end up with a low (i.e. good) p value. With a big enough study, even a tiny true difference will come out significant. On the other hand, when the true difference is huge, a small study might be all you need to get the same p value.
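
A quick sketch of that trade-off, with invented effect sizes and sample sizes: a tiny true difference in a huge sample, and a huge true difference in a tiny sample, can both land under p = 0.05.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Tiny true difference (Cohen's d = 0.1) but very large groups.
a = rng.normal(0.0, 1.0, size=5_000)
b = rng.normal(0.1, 1.0, size=5_000)
print(stats.ttest_ind(a, b).pvalue)  # typically well below 0.05

# Huge true difference (d = 1.5) in small groups.
a = rng.normal(0.0, 1.0, size=15)
b = rng.normal(1.5, 1.0, size=15)
print(stats.ttest_ind(a, b).pvalue)  # usually also below 0.05
```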

A power calculation is a way of specifying how likely a given study would be to detect a difference of a given size, based on the size of the study. These are usually used ahead of time to work out how big your upcoming study needs to be, assuming you can guess roughly how big the real effect you're interested in is going to be.
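
For instance, a standard prospective power calculation looks like this (using statsmodels, my choice of tool, not anything from the paper); the assumed effect size of d = 0.5 is exactly the kind of advance guess the calculation depends on.

```python
from statsmodels.stats.power import TTestIndPower

# How many subjects per group for an 80% chance of detecting a
# medium-sized difference (Cohen's d = 0.5) at the p < 0.05 level?
analysis = TTestIndPower()
n_per_group = analysis.solve_power(effect_size=0.5, power=0.8, alpha=0.05)
print(round(n_per_group))  # roughly 64 per group
```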

Ioannidis turned this on its head and asked: assuming that the true difference in brain volume is what the average of all the published studies says it is, how many of the published studies were big enough that they ought to have successfully detected it?
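
In sketch form, the inverse calculation looks something like this. The pooled effect size and the study sizes below are invented for illustration; the point is that each study's power to detect the meta-analytic effect falls straight out of its sample size.

```python
from statsmodels.stats.power import TTestIndPower

meta_effect = 0.4               # assume the meta-analytic Cohen's d is "true"
study_sizes = [12, 20, 35, 50]  # hypothetical per-group sizes of four studies

analysis = TTestIndPower()
for n in study_sizes:
    power = analysis.power(effect_size=meta_effect, nobs1=n, alpha=0.05)
    print(f"n = {n} per group: power = {power:.2f}")
```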

He found 41 separate meta-analyses for different brain regions in various disorders. These were published in 8 papers, because each paper reported on multiple regions. He only looked at meta-analyses published in the past 4 years, but these analyses will themselves have included older work. This means that this paper is a kind of meta-meta-analysis; he didn't directly consider the raw brain scans at all.

The meta-analyses found many significant volume differences - but in 29 of those 41, there was an excess of significant studies. In other words, the individual studies were too small to have had a good chance of detecting the effect that they themselves found - suggesting that something funny was going on. Strangely, though, in 10 of the 41 there were too few significant results, and in only 2 was the number just "right".
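
The arithmetic behind "an excess of significant papers" can be sketched as follows: summing each study's power gives the number of significant results you'd expect if everything were reported honestly, and a binomial comparison (my approximation of the kind of test Ioannidis uses, with made-up numbers) says how surprising the observed count is.

```python
from scipy import stats

powers = [0.15, 0.22, 0.35, 0.48, 0.30]  # hypothetical power of five studies
observed_significant = 4                  # hypothetical observed count

expected = sum(powers)   # expected number of significant results: 1.5
n_studies = len(powers)

# Is 4 out of 5 significant results surprising when the average power is 0.3?
result = stats.binomtest(observed_significant, n_studies,
                         p=expected / n_studies, alternative="greater")
print(f"expected {expected:.1f}, observed {observed_significant}, "
      f"p = {result.pvalue:.3f}")
```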

For what it's worth, studies on schizophrenia and on relatives of people with schizophrenia showed the least evidence of this problem, while autism was terrible, with 4 times as many significant papers as would be expected given the studies' power. I'm not sure this is worth much, though. We don't know if this tells us more about schizophrenia vs autism, or more about the researchers who study them.

Anyway, this is an important study, and the inverse power calculation approach is certainly a useful one. It's not new, but it's not used as widely as it ought to be. It does make the assumption that the meta-analyses are "right" about the effect size, and then paradoxically concludes that they are biased. If anything, this means the true bias is probably even bigger than the paper suggests: if the analyses are biased, the true effect size is smaller than assumed, and the studies should have been even less likely to find it.

Unfortunately, this doesn't tell us which of the studies are wrong, so it's not directly useful for people researching mental illness. It tells us that there is something wrong with scientific publishing, however. Truth be told, I suspect that a similar picture would emerge if you did this kind of thing in many other fields of science. The only real solution, in my book, would be to require the pre-registration of scientific studies. Ioannidis actually advocates this at the end of the paper.

Ioannidis JP (2011). Excess significance bias in the literature on brain volume abnormalities. Archives of General Psychiatry. PMID: 21464342
