You don't need new data to produce new science. A re-analysis or re-interpretation can be just as important and original as a new set of results. I say this because there's an interesting discussion going on over at PubPeer. In brief, British physicists Julian Stirling and colleagues have released a draft paper that reanalyses published data to criticize the idea of 'striped' nanoparticles. Nanoparticles are tiny bits of a material, such as gold. They can be coated in various chemicals (ligands), which gives them important biological and medical applications. It has been suggested that some mixtures of ligands form regular stripes on the surface of gold nanoparticles, and that these stripes can be seen with a scanning tunnelling microscope (STM).
Stirling et al argue that these stripes are nothing more than artefacts caused by technical limitations of the STM; their argument is that the stripes are no more real than, say, the dots, blotches and shadows that appear when images are converted to JPEG format. According to Stirling et al, these artefacts have given rise to 23 peer-reviewed publications, many in top journals. For more on the background, see here and here.
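The JPEG analogy can be made concrete with a toy sketch (this is an illustration of the general idea, not the authors' actual analysis): lossy processing of a perfectly smooth signal can manufacture band-like structure that exists only in the processed output, never in the underlying data.

```python
# Toy illustration: coarse quantisation (as in aggressive lossy
# compression) turns a smooth intensity ramp into flat 'bands' with
# abrupt edges -- stripe-like structure that is pure artefact.

def quantise(values, step):
    """Round each value to the nearest multiple of `step`."""
    return [step * round(v / step) for v in values]

# A smooth ramp from 0 to 99: 100 distinct levels, no repeating structure.
ramp = [float(v) for v in range(100)]

# After coarse quantisation the ramp collapses into a handful of bands.
banded = quantise(ramp, 25)

print(len(set(ramp)))    # 100 distinct levels in the original
print(len(set(banded)))  # 5 distinct levels: 0, 25, 50, 75, 100
```

Anyone looking only at the quantised output might report five "bands" as a real feature of the sample; the point of the reanalysis debate is that deciding whether such features are signal or artefact is itself interpretive work.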
Anyway, Stirling et al's new paper is based largely on a reanalysis of the original data presented in support of the claims about striped nanoparticles. On PubPeer, the authors say that this was a problem when they tried to get it published:
Journals were unwilling to even consider papers which focused on reanalysis of published data (Something which I find very worrying...)
They quote an editor at an unnamed journal that had previously published some of the critiqued work:
"However, [our journal] does NOT publish papers that rely only on existing published data. In other words. [our journal] does NOT publish papers that correct, correlate, reinterpret, or in any way use existing published literature data. We only publish papers with original experimental data."
This is an all too common sentiment. It reminds me of the psychiatrists who, in response to criticism of their paper about bipolar disorder, wrote that:
"[Our critics] view their position as part of a 'debate' about the 'ever-widening bipolar spectrum.' We consider data, not debates, as central to the progress in the scientific understanding of mood disorders…"
Yet data is not science, or understanding, or even knowledge: it's just data. All data needs to be interpreted before it can tell us anything. Sometimes an interpretation will be very simple (e.g. "this image accurately represents the subject"), but as Stirling et al point out, these simple interpretations can be mistaken. Science is the process of understanding the world by drawing the right interpretations from the evidence.

A paper that offers a new interpretation of old data is a new piece of scientific work, in every sense of the term, and publishers ought to treat it as such. Much of Darwin's work was reinterpretation, as was almost all of Einstein's, to name just a couple of famous scientific reinterpreters.

The idea that new science requires new data might be called hyperempiricism. This is a popular stance among journal editors (perhaps because it makes copyright disputes less likely). Hyperempiricism also appeals to scientists when their work is being critiqued; it allows them to say to critics, "go away until you get some data of your own", even when the dispute is not about the data, but about how it should be interpreted.
Julian Stirling, Ioannis Lekkas, Adam Sweetman, Predrag Djuranovic, Quanmin Guo, Josef Granwehr, Raphaël Lévy, & Philip Moriarty (2013). Critical assessment of the evidence for striped nanoparticles. arXiv: 1312.6812v1