Meta-analyses are systematic syntheses of scientific evidence, most commonly randomized controlled clinical trials. A meta-analysis combines the results of multiple studies and can lead to new insights and more reliable results. However, according to Italian surgeon Giovanni Tebala, writing in Medical Hypotheses, meta-analyses are becoming too popular and are in danger of taking over the medical literature. Searching the PubMed database, Tebala shows that the yearly rate of publication of new meta-analyses and systematic reviews (the green line on the graph below) is growing exponentially: there are now over 10,000 meta-analyses published every year, double the number that appeared just five years ago.
The number of primary clinical trials (red line) is growing steadily - over 30,000 were published last year - but, relatively speaking, the number of meta-analyses is growing much faster. So the ratio of trials to meta-analyses (purple dotted line) is falling: it's now about 3-to-1. Why is this? Tebala suggests that researchers are turning to meta-analyses to bolster their CVs:
Randomized controlled trials require hard work and financial commitment, whereas meta-analyses and systematic reviews can be relatively easy to perform and often get published in high impact journals. Many researchers might decide to devote themselves to the latter approach.
This is a bad thing, Tebala implies:
If we are unable to invert this trend, in the future we will have a growing number of synthetic studies utilizing someone else’s original data and fewer raw data to base our knowledge upon.
Tebala doesn't actually spell out why the glut of meta-analyses is a problem for science; he seems more concerned about the unfairness of meta-analysts getting credit for their work with "someone else's data". Nonetheless, I think there is a scientific problem: as the ratio of meta-analyses to actual data increases, the scientific literature becomes dominated by interpretation and analysis resting on a limited amount of evidence. Put simply, the risk is that science will get 'top heavy'. There's something called the 'pyramid of evidence', with meta-analyses at the top and primary trials further down. Here's one depiction:
Update 11th August 2015: On Twitter, Hilda Bastian (@hildabast) pointed out a serious flaw in Tebala's methodology. Tebala counted RCTs by searching for "randomized controlled trial" and counted synthetic studies by searching for "meta-analysis OR systematic review". The problem is that many meta-analyses come from outside the world of clinical trials: there are meta-analyses of many things, such as genetic associations or the evidence for theories in psychology. As such, the 3-to-1 ratio between RCTs and meta-analyses is inflated, because the meta-analysis count includes papers that have nothing to do with RCTs. To improve on Tebala's approach, I ran some searches of my own, adding the word "placebo" to Tebala's search terms in order to exclude papers not dealing with placebo-controlled studies. This reveals that in 2013, the last year Tebala studied, there were 5477 hits for "randomized controlled trial" AND "placebo", while there were 1312 for ("meta-analysis" OR "systematic review") AND "placebo". In other words, the RCT-to-synthetic-paper ratio is over 4-to-1, not less than 3-to-1. Not quite as bad as Tebala reported. The trend of exponential growth in meta-analyses is also less evident:
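For anyone who wants to check the arithmetic behind the revised ratio, here is a minimal Python sketch. The hit counts are hard-coded from the 2013 searches described above; actually re-running the queries would mean going through PubMed's E-utilities, which isn't shown here.

```python
# Hit counts for 2013, taken from the PubMed searches described in the update
rct_hits = 5477    # "randomized controlled trial" AND "placebo"
synth_hits = 1312  # ("meta-analysis" OR "systematic review") AND "placebo"

# Ratio of primary trials to synthetic papers (meta-analyses/systematic reviews)
ratio = rct_hits / synth_hits
print(f"RCT-to-synthetic ratio in 2013: {ratio:.2f} to 1")
# roughly 4.17 to 1, i.e. "over 4-to-1"
```

The same calculation applied to Tebala's original, unfiltered counts (about 30,000 trials to 10,000 meta-analyses) gives the roughly 3-to-1 figure quoted earlier in the post.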
Tebala GD (2015). What is the future of biomedical research? Medical Hypotheses. PMID: 26194725