Everyone's been talking about psychologist Uri Simonsohn and his role in the downfall of two scientific fraudsters.
When the story first broke, the methods Simonsohn used to spot the dodgy data were a mystery - which only added to the buzz. The paper revealing the approach is now up online and it's a must-read. It's not often a statistics paper offers the train-wrecky schadenfreude of watching two fraudsters' careers come to a well-deserved end.
What's rather disturbing about the article, however, is that it doesn't really contain much that's new, in principle. Simonsohn used statistics to spot data in published papers that was, in effect, 'too good to be true'. He then followed up seemingly dodgy cases with some more stats, using simulations of what real data ought to look like, to verify that it was in fact made up. A simple idea in retrospect but one that's ...
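To give a flavour of the simulation idea - and this is my own illustrative sketch, not Simonsohn's actual procedure, with all the numbers invented - one version of the "too good to be true" check asks: given a study's reported condition means and standard deviation, how often would honest random sampling produce means as similar as the ones reported?

```python
import random
import statistics

# Hypothetical sketch of a simulation-based "too good to be true" check.
# All numbers are invented for illustration; this is not Simonsohn's
# published procedure, just the general idea described above.

def similarity_stat(means):
    """Spread of the condition means; suspiciously low = 'too good'."""
    return statistics.stdev(means)

def simulate_p_value(reported_means, sd, n_per_cell, n_sims=10_000, seed=1):
    """Fraction of simulated honest experiments whose condition means are
    at least as similar as the reported ones (a one-sided p-value)."""
    rng = random.Random(seed)
    grand_mean = statistics.mean(reported_means)
    observed = similarity_stat(reported_means)
    hits = 0
    for _ in range(n_sims):
        # Simulate each condition's sample mean under a common true mean
        sim_means = [
            statistics.mean(rng.gauss(grand_mean, sd) for _ in range(n_per_cell))
            for _ in reported_means
        ]
        if similarity_stat(sim_means) <= observed:
            hits += 1
    return hits / n_sims

# Made-up example: three condition means reported implausibly close together
p = simulate_p_value([5.01, 5.02, 5.03], sd=1.5, n_per_cell=20)
print(p)
```

A tiny p here means the reported means are more similar than random sampling error should allow - exactly the kind of pattern that looks impressive in a paper but is a red flag to anyone who knows what real, noisy data looks like.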