This year we give thanks for a concept that has been particularly useful in recent times: the error bar.
Error bars are a simple and convenient way to characterize the expected uncertainty in a measurement or, for that matter, the expected accuracy of a prediction. In a wide variety of circumstances (though certainly not always), we can characterize uncertainties by a normal distribution -- the bell curve made famous by Gauss.
Sometimes the measurements are a little bigger than the true value, sometimes they're a little smaller. The nice thing about a normal distribution is that it is fully specified by just two numbers -- the central value, which tells you where it peaks, and the standard deviation, which tells you how wide it is.
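As a concrete sketch of that "fully specified by two numbers" point, here are a few lines of Python demonstrating the familiar rule of thumb that about 68% of draws from a normal distribution land within one standard deviation of the central value, no matter what the two numbers actually are. The particular values (a central value of 10 and a standard deviation of 0.5) are made up purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

mu, sigma = 10.0, 0.5  # central value and standard deviation (illustrative)
draws = rng.normal(mu, sigma, size=100_000)

# Fraction of draws within one standard deviation of the center:
# for any normal distribution this is ~0.683, independent of mu and sigma.
within_one_sigma = np.mean(np.abs(draws - mu) < sigma)
print(f"within 1 sigma: {within_one_sigma:.3f}")  # ~0.683
```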
The simplest way of thinking about an error bar is as our best guess at the standard deviation of what the underlying distribution of our measurements would be if we could repeat them many times.
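In practice, of course, we don't know that underlying distribution; we estimate it from the data we have. Here is a minimal sketch, with made-up numbers: take a handful of repeated measurements, report the sample mean as the central value and the sample standard deviation as the error bar.

```python
import numpy as np

# Five made-up repeated measurements of the same quantity.
measurements = np.array([9.8, 10.3, 10.1, 9.9, 10.4])

central_value = measurements.mean()
error_bar = measurements.std(ddof=1)  # sample standard deviation

print(f"{central_value:.2f} +/- {error_bar:.2f}")
```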