Scientific peer review is based on the idea that some papers deserve to get published and others don't.
By asking a hand-picked team of 3 or 4 experts in the field (the "peers"), journals hope to accept the good stuff, filter out the rubbish, and improve the not-quite-good-enough papers.
This all assumes that the reviewers, being experts, are able to make a more or less objective judgement. In other words, when a reviewer says that a paper's good or bad, they're reporting something about the paper, not just giving their own personal opinion.
If that's true, reviewers ought to agree with each other about the merits of each paper. On the other hand, if it turns out that they don't agree any more often than we'd expect if they were assigning ratings entirely at random, that would suggest that there's a problem somewhere.
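(A quick aside on what "more often than chance" actually means: agreement studies usually report a chance-corrected statistic such as Cohen's kappa, which compares how often two reviewers really agree with how often they'd agree just from their overall rating habits. Here's a minimal illustrative sketch in Python; the ratings and names are entirely made up by me, not taken from any study.)

```python
# A minimal sketch of chance-corrected agreement (Cohen's kappa) between two
# hypothetical reviewers. The ratings below are made up purely for illustration.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Return Cohen's kappa for two equal-length lists of categorical ratings."""
    assert len(rater_a) == len(rater_b)
    n = len(rater_a)

    # Observed agreement: fraction of papers where the two reviewers match.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

    # Expected agreement by chance, from each rater's marginal rating frequencies.
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)

    # Kappa of 0 means no better than chance; 1 means perfect agreement.
    return (observed - expected) / (1 - expected)

# Two hypothetical reviewers rating ten papers as accept/revise/reject.
reviewer_1 = ["accept", "reject", "revise", "accept", "reject",
              "revise", "accept", "reject", "accept", "revise"]
reviewer_2 = ["accept", "revise", "revise", "reject", "reject",
              "accept", "accept", "reject", "revise", "revise"]

print(f"kappa = {cohens_kappa(reviewer_1, reviewer_2):.2f}")
```

A kappa near zero means the reviewers agree about as often as two people handing out ratings blindly; that's the benchmark the "entirely at random" comparison is getting at.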
Guess what? Bornmann et al have just reported ...