Via Scott Siskind, an interesting study of peer review:
Peer-review is widely used throughout academia, most notably in the publication of journal articles and the allocation of research grants. Yet peer-review has been subject to much criticism, including being slow, unreliable, subjective and potentially prone to bias. This paper contributes to this literature by investigating the consistency of peer-reviews and the impact they have upon a high-stakes outcome (whether a research grant is funded). Analysing data from 4,000 social science grant proposals and 15,000 reviews, this paper illustrates how the peer-review scores assigned by different reviewers have only low levels of consistency (a correlation between reviewer scores of only 0.2).
These authors also found that a single negative review has a major impact on whether studies are accepted. I imagine this is because editors know that some positive reviews are just back-scratching among friends, so they are inclined to discount them.
Of course, this raises the question: if peer review is slow, unreliable, and prone to bias, what would be better?
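The headline number from the study (a correlation of only 0.2 between reviewers scoring the same proposals) is easy to make concrete. The sketch below is a hypothetical simulation, not the paper's actual data or method: each proposal has a shared quality signal, each reviewer adds independent noise, and the resulting pairwise correlation between reviewer scores lands near 0.2 when the noise variance is four times the signal variance.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical illustration (not the study's data): simulate pairs of
# reviewers scoring the same 4,000 proposals. Each proposal has a true
# quality signal, but reviewer-specific noise dominates.
n_proposals = 4000
quality = rng.normal(0, 1, n_proposals)            # shared signal, variance 1
noise_scale = 2.0                                  # reviewer noise, variance 4
reviewer_a = quality + rng.normal(0, noise_scale, n_proposals)
reviewer_b = quality + rng.normal(0, noise_scale, n_proposals)

# Inter-rater consistency measured as the Pearson correlation between
# the two reviewers' scores. Theoretically 1 / (1 + 4) = 0.2 here.
r = np.corrcoef(reviewer_a, reviewer_b)[0, 1]
print(round(r, 2))
```

The point of the toy model is that a 0.2 correlation is what you get when the idiosyncratic part of a reviewer's judgment carries several times the weight of the proposal's underlying quality.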