What’s the News: Humans are eerily good at sifting the visual wheat from the chaff—just think of our penchant for word searches, Easter egg hunts, and lushly animated first-person shooters. But how good are we really? To test the limits of these abilities, in a recent study neuroscientists gave subjects extremely difficult, high-speed Where’s Waldo-type search tasks studded with red herrings. But again and again, subjects found what they were looking for, leading the team to report that humans operate at a near-optimal level when it comes to visual searches—a skill that likely came in handy in our evolutionary history.

How the Heck: For a fraction of a second, a cluster of short lines randomly colored gray or black and set at various angles, called “distracters,” flashed before subjects’ eyes. Half of the time, a single line whose orientation didn’t change across images was hidden among them, and subjects indicated whether this target had appeared.
Even with the images whipping by at high speed and the complicating effect of color, subjects still detected the target at a level close to the best possible success rate, a benchmark defined by probability theory that takes into account how heavily an observer should weight each piece of information available to them. "An optimal observer weights more reliable pieces of sensory evidence more heavily when making a perceptual judgment," the researchers write in the paper. "For example, when two noisy sensory cues about a single underlying stimulus have to be combined, an optimal observer assigns higher weight to the cue that, on that trial, is most reliable." In this study, the angle of the lines was a reliable cue (noticing it helped subjects determine whether the target line was present), while color was not.
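To make that weighting concrete, here is a minimal sketch, in Python, of combining two noisy cues about one stimulus by weighting each cue by its reliability (the inverse of its noise variance). The Gaussian-noise assumption and the function name combine_cues are illustrative choices, not details taken from the paper.

```python
# Toy illustration (not the study's model): combine two noisy cues about a single
# stimulus, giving each cue a weight proportional to its reliability (1 / variance).
# Under independent Gaussian noise, this weighting is the optimal strategy.
import numpy as np

def combine_cues(cue_values, cue_sigmas):
    """Return the reliability-weighted estimate and its combined uncertainty."""
    cue_values = np.asarray(cue_values, dtype=float)
    reliabilities = 1.0 / np.asarray(cue_sigmas, dtype=float) ** 2  # reliability = 1 / sigma^2
    weights = reliabilities / reliabilities.sum()                   # more reliable cues get more weight
    estimate = float(np.dot(weights, cue_values))                   # weighted average of the cues
    combined_sigma = float(np.sqrt(1.0 / reliabilities.sum()))      # combined estimate is less noisy than either cue
    return estimate, combined_sigma

# A precise cue (sigma = 1) pulls the estimate far harder than a noisy one (sigma = 4):
# the answer lands near 10.6 rather than at the midpoint of 15.
print(combine_cues(cue_values=[10.0, 20.0], cue_sigmas=[1.0, 4.0]))
```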
"We found that even in this complex task, people came close to being optimal in detecting the target," the lead researcher said in a press release. "That means that humans can in a split second integrate information across space while taking into account the reliability of that information. That is important in our daily lives."
The team thinks people use groups of networked neurons to perform this breathtakingly quick analysis, and they built a model neural network to show how it could happen.
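Their neural-network model isn’t reproduced here, but the kind of computation it approximates, integrating evidence across space while taking each measurement’s reliability into account, can be sketched in a few lines. The toy observer below is an assumption-laden illustration, not the authors’ model: it assumes Gaussian measurement noise, uniformly distributed distracter orientations, and equal prior odds, and every function and parameter name is hypothetical.

```python
# Simplified Bayesian "is a target present?" observer, for illustration only.
# Each location i yields a noisy orientation measurement x[i] with its own noise
# level sigma[i]; evidence is pooled across locations, and reliable (low-noise)
# measurements automatically carry more weight.
import numpy as np

def p_target_present(x, sigma, target_orientation, orientation_range=180.0, prior_present=0.5):
    """Posterior probability that one of the items is the target."""
    x = np.asarray(x, dtype=float)
    sigma = np.asarray(sigma, dtype=float)

    # Likelihood of each measurement if that item IS the target (Gaussian noise)...
    like_target = np.exp(-0.5 * ((x - target_orientation) / sigma) ** 2) / (np.sqrt(2 * np.pi) * sigma)
    # ...and if it is a distracter with a random orientation (uniform over the range).
    like_distracter = 1.0 / orientation_range

    # Per-location evidence ratios, averaged over which location might hold the target.
    local_ratio = like_target / like_distracter
    ratio_present = local_ratio.mean()

    prior_odds = prior_present / (1.0 - prior_present)
    posterior_odds = ratio_present * prior_odds
    return posterior_odds / (1.0 + posterior_odds)

# Four items: one measurement sits near the 45-degree target and comes from a
# reliable (low-noise) location, so the observer leans toward "target present".
print(p_target_present(x=[44.0, 90.0, 10.0, 130.0], sigma=[2.0, 8.0, 8.0, 8.0], target_orientation=45.0))
```

In this toy version, a single trustworthy measurement close to the target orientation can outweigh several noisy ones that point the other way, which is the flavor of reliability-weighted integration the quote above describes.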
What’s the Context: The team is interested in whether humans, on the neural level, use a strategy called Bayesian inference
to figure out whether a target object is present. They incorporated information that wasn’t a reliable indicator of the target’s presence—color—into the tests to see how people dealt with it; weighing evidence by its reliability is a key part of Bayesian inference.

The Future Holds: The next step is to up the difficulty of the test and see at what point this preternatural ability to spot the target breaks down. That will give scientists more clues about how visual perception operates on the neural level.

Reference: Wei Ji Ma, Vidhya Navalpakkam, Jeffrey M. Beck, Ronald van den Berg, and Alexandre Pouget. Behavior and neural basis of near-optimal visual search. Nature Neuroscience, 2011. DOI: 10.1038/nn.2814
Image credit: darkpatator/flickr