This is how science is supposed to work. Every single experimental finding ought to be checked and checked again before anybody takes it seriously. Alas, only the most famous experiments are ever replicated even once. Hence this effort, which I think is a terrific idea. The Chronicle of Higher Education:
The project is part of Open Science Framework, a group interested in scientific values, and its stated mission is to “estimate the reproducibility of a sample of studies from the scientific literature.” This is a more polite way of saying “We want to see how much of what gets published turns out to be bunk.” Other studies of this sort have found that many and even most published findings cannot be replicated.
For decades, literally, there has been talk about whether what makes it into the pages of psychology journals—or the journals of other disciplines, for that matter—is actually, you know, true. Researchers anxious for novel, significant, career-making findings have an incentive to publish their successes while neglecting to mention their failures. . . . So why not check? Well, for a lot of reasons. It’s time-consuming and doesn’t do much for your career to replicate other researchers’ findings. Journal editors aren’t exactly jazzed about publishing replications. And potentially undermining someone else’s research is not a good way to make friends.
Recently, a scientist named C. Glenn Begley attempted to replicate 53 cancer studies he deemed landmark publications. He could replicate only six. . . . A related new endeavour called Psych File Drawer allows psychologists to upload their attempts to replicate studies. So far nine studies have been uploaded, and only three of them were successes.

Some skeptics, like John Ioannidis, think that most published scientific "findings" are false. So all praise is due to the founders of The Reproducibility Project, and may many similar efforts follow.
Great project! And likely a difficult list of tasks to take on, but noble work. Just because a scholarly journal publishes something, it doesn't mean those with the skills to test the results will. They're on NSF grants and accountable for certain results, and there are only so many hours in a week. So bravo to these folks who will take it on.
Fantastic project! Unfortunately, very difficult to resource ... and of course it shouldn't be up to others to debunk all the crap: the onus should be on those making claims to provide independent verification, and the amount of crap published is theoretically and practically unbounded. But the whole publication meat-grinder industry would grind to a halt if only sensible and reproducible results were actually permitted to be published.
The usual answer if a particular journal tries to uphold or improve its standards? Publish somewhere else -- or start a new journal! The long-run asymptote of this process is roughly what we see today.