Wednesday, September 21, 2016

Modeling the Crisis in Big Science

Ed Yong:
This is not a new idea. In the 1970s, social scientist Donald Campbell wrote that any metric of quality can become corrupted if people start prioritizing the metric itself over the traits it supposedly reflects. “We realized that his argument works even if individuals aren’t trying to maximize their metrics,” says Smaldino.

He and McElreath demonstrated this by creating a mathematical model in which simulated labs compete with each other and evolve—think SimAcademia. The labs choose things to study, run experiments to test their hypotheses, and try to publish their results. They vary in how much effort they expend in testing their ideas, which affects how many results they get, and how reliable those results are. There’s a trade-off: more effort means truer but fewer publications.
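A minimal sketch of that trade-off in Python might look like the following. The class name, parameter values, and the particular formulas linking effort to output and false positives are illustrative assumptions, not the model the paper actually uses.

import random

class Lab:
    """Toy lab: higher effort means fewer experiments per cycle, but fewer false positives."""

    def __init__(self, effort):
        self.effort = effort  # assumed scale: a value in (0, 1]
        self.publications = 0

    def run_cycle(self, n_max=10, base_rate=0.1, power=0.8):
        """Test some hypotheses and return how many came out positive."""
        # More effort -> fewer hypotheses tested this cycle...
        n_tests = max(1, round(n_max * (1 - self.effort)))
        # ...but a lower false-positive rate when a hypothesis is actually false.
        fp_rate = 0.05 + 0.5 * (1 - self.effort)
        positives = 0
        for _ in range(n_tests):
            is_true = random.random() < base_rate
            positives += random.random() < (power if is_true else fp_rate)
        return positives

A low-effort lab (say Lab(effort=0.2)) runs many sloppy tests and racks up positives, many of them false; a high-effort lab runs a few careful tests and gets fewer, truer results.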

In the model, as in real academia, positive results are easier to publish than negative ones, and labs that publish more get more prestige, funding, and students. They also pass their practices on. With every generation, one of the oldest labs dies off, while one of the most productive ones reproduces, creating an offspring that mimics the research style of the parent. That’s the equivalent of a student from a successful team starting a lab of their own.
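The publication and selection step can be sketched the same way. In this self-contained toy, positive results are published with high probability and negative ones not at all, the oldest lab retires each generation, and the most published lab spawns an offspring that copies its effort level with a little noise; every parameter and functional form here is an assumption made for illustration, not the published model.

import random

def positive_results(effort, n_max=10, base_rate=0.1, power=0.8):
    # Same assumed trade-off as above: low effort buys more tests but more false positives.
    n_tests = max(1, round(n_max * (1 - effort)))
    fp_rate = 0.05 + 0.5 * (1 - effort)
    return sum(random.random() < (power if random.random() < base_rate else fp_rate)
               for _ in range(n_tests))

def step(labs, pub_bias=0.9):
    """One generation: publish, retire the oldest lab, copy the most productive one."""
    for lab in labs:
        lab["age"] += 1
        # Positive results are accepted with high probability; negatives go unpublished.
        lab["pubs"] += sum(random.random() < pub_bias
                           for _ in range(positive_results(lab["effort"])))
    labs.remove(max(labs, key=lambda l: l["age"]))   # one of the oldest labs dies off
    parent = max(labs, key=lambda l: l["pubs"])      # one of the most productive reproduces
    child_effort = min(1.0, max(0.05, parent["effort"] + random.gauss(0, 0.05)))
    labs.append({"effort": child_effort, "age": 0, "pubs": 0})

labs = [{"effort": random.uniform(0.2, 1.0), "age": 0, "pubs": 0} for _ in range(50)]
for _ in range(2000):
    step(labs)
print("mean effort:", sum(l["effort"] for l in labs) / len(labs))

Run with these made-up numbers, the population’s mean effort tends to drift downward generation after generation, which is the qualitative slide the next paragraph describes.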

Over time, and across many simulations, the virtual labs inexorably slid towards less effort, poorer methods, and almost entirely unreliable results. And here’s the important thing: Unlike the hypothetical researcher I conjured up earlier, none of these simulated scientists are actively trying to cheat. They used no strategy, and they behaved with integrity. And yet, the community naturally slid towards poorer methods. What the model shows is that a world that rewards scientists for publications above all else—a world not unlike this one—naturally selects for weak science.

So long as scientists are judged almost solely by their publications, this will be very hard to change.
