This is how science is supposed to work. Every single experimental finding ought to be checked and checked again before anybody takes it seriously. Alas, only the most famous experiments are ever replicated even once. Hence this effort, which I think is a terrific idea. The Chronicle of Higher Education:
The project is part of Open Science Framework, a group interested in scientific values, and its stated mission is to “estimate the reproducibility of a sample of studies from the scientific literature.” This is a more polite way of saying “We want to see how much of what gets published turns out to be bunk.” Other studies of this sort have found that many and even most published findings cannot be replicated.
For decades, literally, there has been talk about whether what makes it into the pages of psychology journals—or the journals of other disciplines, for that matter—is actually, you know, true. Researchers anxious for novel, significant, career-making findings have an incentive to publish their successes while neglecting to mention their failures. . . . So why not check? Well, for a lot of reasons. It’s time-consuming and doesn’t do much for your career to replicate other researchers’ findings. Journal editors aren’t exactly jazzed about publishing replications. And potentially undermining someone else’s research is not a good way to make friends.
Recently, a scientist named C. Glenn Begley attempted to replicate 53 cancer studies he deemed landmark publications. He could only replicate six. . . . A related new endeavour called Psych File Drawer allows psychologists to upload their attempts to replicate studies. So far nine studies have been uploaded and only three of them were successes.

Some skeptics, like John Ioannidis, think that most published scientific “findings” are false. So all praise is due to the founders of The Reproducibility Project, and may many similar efforts follow.