The use and misuse of statistics has driven social science into a troublesome cul-de-sac; Nobel Prize winner Daniel Kahneman has warned of a coming "train wreck" in the profession. The problem is best illustrated by a very clever prank:
In 2011, a psychologist named Joseph P. Simmons and two colleagues set out to use real experimental data to prove an impossible hypothesis. Not merely improbable or surprising, but downright ridiculous. The hypothesis: that listening to The Beatles’ “When I’m Sixty-Four” makes people younger. The method: Recruit a small sample of undergraduates to listen to either The Beatles song or one of two other tracks, then administer a questionnaire asking for a number of random and irrelevant facts and opinions—their parents’ ages, their restaurant preferences, the name of a Canadian football quarterback, and so on. The result: By strategically arranging their data and carefully wording their findings, the psychologists “proved” that randomly selected people who hear “When I’m Sixty-Four” are, in fact, younger than people who don’t.
The statistical sleight of hand involved in arriving at this result is a little complicated (more on this later), but the authors’ point was relatively simple. They wanted to draw attention to a glaring problem with modern scientific protocol: Between the laboratory and the published study lies a gap that must be bridged by the laborious process of data analysis. As Simmons and his co-authors showed, this process is a virtual black box that, as currently constructed, “allows presenting anything as significant.” And if you can prove anything you want from your data, what, if anything, do you really know?
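To make that "black box" a little more concrete, here is a minimal simulation sketch of just one of the researcher degrees of freedom Simmons and his colleagues describe: measuring several outcomes on pure noise and reporting whichever one happens to clear p &lt; 0.05. This is an illustration of the general idea, not the authors' actual analysis, and the sample sizes and outcome counts are made up for the example. Even when the true effect is exactly zero, the chance of finding "something significant" climbs well past the nominal 5 percent.

```python
# Sketch: how testing multiple outcomes on pure noise inflates false positives.
# Illustrative only -- not the analysis from Simmons et al. (2011).
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
n_per_group = 20       # a small sample, as in the original prank
n_outcomes = 5         # several different things the researcher could test
n_experiments = 10_000

false_positives = 0
for _ in range(n_experiments):
    # Both groups are drawn from the SAME distribution: the true effect is zero.
    group_a = rng.normal(size=(n_per_group, n_outcomes))
    group_b = rng.normal(size=(n_per_group, n_outcomes))
    # Test every outcome and keep only the best-looking p-value.
    p_values = [ttest_ind(group_a[:, k], group_b[:, k]).pvalue
                for k in range(n_outcomes)]
    if min(p_values) < 0.05:
        false_positives += 1

print("Nominal false-positive rate: 5%")
print(f"Observed rate with {n_outcomes} outcomes to pick from: "
      f"{false_positives / n_experiments:.1%}")   # roughly 20-25%
```

And that is only one trick. The original paper showed that stacking a few such flexibilities together, such as extra covariates, optional stopping, and dropping conditions, can push the effective false-positive rate above 60 percent, which is how a Beatles song ends up "making" people younger.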
The answer is, not very much. Trouble has been brewing in the field for years, with reports of unreplicable experiments and faked data. One day the people who jump so eagerly into the mass media with their dubious findings about "priming" and so on are going to wake up and find that nobody believes anything they say anymore. More, from a great article by Jerry Adler at Pacific Standard:
Around the same time that Simmons published his tour de force, a paper by the respected Cornell psychologist Daryl Bem claimed to have found evidence that some people can react to events that are about to occur in the near future—a finding as ludicrous-sounding as Simmons’, but one that has been presented by its author as completely legitimate. Bem’s paper set off a frenzy of efforts within the field to debunk his findings. . . . After all, if you can follow all the methods and protocols of science and end up with an impossible result, perhaps there is something wrong with those methods and protocols in the first place.
Something unprecedented has occurred in the last couple of decades in the social sciences. Overlaid on the usual academic incentives of tenure, advancement, grants, and prizes are the glittering rewards of celebrity, best-selling books, magazine profiles, TED talks, and TV appearances. A whole industry has grown up around marketing the surprising-yet-oddly-intuitive findings of social psychology, behavioral economics, and related fields. The success of authors who popularize academic work—Malcolm Gladwell, the Freakonomics guys, and the now-disgraced Jonah Lehrer—has stoked an enormous appetite for usable wisdom from the social sciences. And the whole ecosystem feeds on new, dramatic findings from the lab. “We are living in an age that glorifies the single study,” says Nina Strohminger, a Duke post-doc in social psychology. “It’s a folly perpetuated not just by scientists, but by academic journals, the media, granting agencies—we’re all complicit in this hunger for fast, definitive answers.”
Indeed. And it is not going to end well.