Our prehistoric ancestors faced many lethal hazards. Sanitary conditions in the wild weren't great, so people were in constant danger of becoming ill from tainted foods. Worse, while foraging for snacks, they could easily become something else's dinner. On balance, however, our forebears must have evolved good ways of assessing risk, or I wouldn't be writing this—and you wouldn't be reading it.
Recent neurological research suggests that we are innately wired to avoid dangers by calculating odds based on factors that you'd expect as well as others that may surprise you. Let's see how accurate your brain is at calculating risk and what factors it considers in making these assessments.
Brain Recognizing Threat: Experiment 1
Here is a list of bad things that might affect you adversely. Rate each item according to how much of a threat you believe it poses to you or your loved ones (zero being "no sweat," 10 being "major concern").
Brain Recognizing Threat: Experiment 2
Now let's try to parse out why your brain rated the perceived threat of those seven nasties the way it did.
On a scale of zero to 10, rate how ghastly each risk is, how much it feels as if it's imposed on you, and how new it seems. The "ghastly" scale is a measure of awfulness; for instance, a sudden heart attack might rate lower on the ghastly scale than being devoured by an anaconda. The "imposed on you" scale refers to the degree of choice you have in exposing yourself to a particular hazard; zero implies a great deal of choice (e.g., smoking), and 10 implies no choice (e.g., Alzheimer's). The "new" scale describes how long you have known about a hazard (SARS would be an example of a new threat, the flu an old one).
The bad items that you rated highest for being ghastly, imposed on you, and new probably also got high threat ratings in Experiment 1.
According to Paul Slovic, a psychologist at the University of Oregon, we tend to fret more about risks that can lead to gruesome outcomes, risks that are not of our choosing, and risks that are new, regardless of the true level of threat posed by a given hazard.
How accurate are such perceptions of threat? If we define a "true threat" as the probability of exposure to a hazard multiplied by the severity of the impact of the exposure, the seven items from Experiment 1 fall along the continuum below. The graph is based on data taken from Risk: A Practical Guide for Deciding What's Really Safe and What's Really Dangerous in the World Around You (Houghton Mifflin, 2002) by David Ropeik and George Gray of the Harvard Center for Risk Analysis. Notice that HIV is far less of a threat than UV sunlight, owing to the very low probability of exposure to the former and the high probability and high severity (skin cancer) of the latter.
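To make that arithmetic concrete, here is a minimal sketch of the probability-times-severity calculation. The hazard names echo those mentioned in the article, but the numerical probabilities and severities are illustrative placeholders, not figures from Ropeik and Gray's data.

```python
# A minimal sketch of the "true threat" idea described above:
#   true threat = probability of exposure x severity of impact.
# The numbers below are illustrative placeholders, NOT data from
# Ropeik and Gray; they only show how the ranking falls out.

hazards = {
    # hazard: (probability of exposure, severity of impact), both on a 0-1 scale
    "UV radiation from sunlight": (0.9, 0.6),
    "particulate air pollution": (0.8, 0.5),
    "HIV": (0.01, 0.9),
}

def true_threat(probability: float, severity: float) -> float:
    """Combine exposure probability and impact severity into a single score."""
    return probability * severity

# Rank the hazards from highest to lowest threat score.
ranked = sorted(hazards.items(), key=lambda item: true_threat(*item[1]), reverse=True)
for name, (p, s) in ranked:
    print(f"{name}: {true_threat(p, s):.3f}")
```

Even with made-up numbers, the point of the formula comes through: a rare but severe hazard (HIV) can score well below a common, moderately severe one (UV exposure).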
It may have made sense for our ancestors to fear ghastly, involuntary, or new threats. But in the modern world, where being eaten by a bear is rare, such unconscious biases may lead us to stress out over extremely remote possibilities while ignoring others (like UV radiation and particulate air pollution) that are potentially big problems.
If your ratings in Experiment 1 were significantly different from the true threat levels shown here, leaving your brain alone to assign risk could be risky business.
Many of Paul Slovic's publications are available through the Decision Research Center: www.decisionresearch.org.