Our single biggest concern when examining research is publication bias, broadly construed. We wonder both (a) how many studies are done but never published because the results are not interesting or not what the authors had hoped for, and (b) for a given paper, how many different interpretations of the data were tried before the ones that appear in the final version were picked. The best antidote we can think of is pre-registration of studies along the lines of ClinicalTrials.gov, a service of the U.S. National Institutes of Health. On that site, medical researchers announce their questions, hypotheses, and plans for collecting and analyzing data, all published before any data is collected or analyzed. If the results come out differently from what the researchers hoped for, there is then no way to hide this from a motivated investigator.
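The file-drawer problem described above is easy to see in a toy simulation (a hypothetical illustration, not part of the original argument): run many studies of an effect that is exactly zero, "publish" only those that clear a conventional significance threshold, and the published record consists entirely of false positives.

```python
import random
import statistics

random.seed(0)

def run_study(n=30):
    """Simulate one study of a true null effect: two groups drawn from
    the same distribution, compared with a simple Welch-style t statistic."""
    a = [random.gauss(0, 1) for _ in range(n)]
    b = [random.gauss(0, 1) for _ in range(n)]
    mean_diff = statistics.mean(a) - statistics.mean(b)
    se = (statistics.variance(a) / n + statistics.variance(b) / n) ** 0.5
    return mean_diff / se

# Run 1,000 studies of a nonexistent effect; "publish" only |t| > 2
results = [run_study() for _ in range(1000)]
published = [t for t in results if abs(t) > 2]

print(f"studies run:       {len(results)}")
print(f"studies published: {len(published)}")
# Roughly 5% of null studies clear the threshold by chance, yet every
# published study reports a "significant" effect of zero.
```

A reader of only the published studies would see unanimous evidence for an effect that does not exist, which is exactly why knowing the denominator, via pre-registration, matters.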
As the example of the NIH illustrates, this is not just a social science problem; it is rife in any science that relies on statistics. Statistical thresholds have become targets to be hit by any means necessary, when in reality they should be guidelines for getting a better grasp of reality. The only solution to conscious and unconscious bias in the statistical sciences seems to me to be radical transparency of some sort. There is a fair amount of science ethnography suggesting that how science is actually done departs greatly from the clean, rational enterprise one might presume from the final product. The only way to clean up some of the natural human bias in the enterprise is to shed some light on it.