Do you believe that people’s eyes emit an invisible beam of force?
According to a rather fun paper in PNAS, you probably do, on some level, believe that. The paper is called “Implicit model of other people’s visual attention as an invisible, force-carrying beam projecting from the eyes”.
To show that people unconsciously believe in eye-beams, psychologists Arvid Guterstam et al. had 157 MTurk volunteers perform a computer task in which they had to judge the angle at which paper tubes would lose balance and tip over. At one side of the screen, a man was shown staring at the tube.
The key result was that volunteers rated the tube more likely to tip over if it was tilted away from the man gazing at it – as if the man’s eyes were pushing the tube away. The effect was small, a difference in estimated tip-angle of just 0.67 degrees between tilting away from and tilting towards the man, but it was significant (p=0.006). No such effect was seen if the man was blindfolded, suggesting that his eyes had to be visible for the sense of force to be felt.
Some smaller follow-up experiments replicated the effect and also showed (Experiment 4) that the effect didn’t work if participants were told the tube was full of heavy concrete, which is consistent with the idea that people believed the eye-beams to be very weak.
Guterstam et al. conclude that:
This is a fun paper because the belief that vision involves a force or beam coming out from the eyes is actually a very old one. The theory is called “extramission” and it was popular among the ancient Greeks, but few people would admit to believing in eye-beams today – even if the concept is well known in recent fiction:
In fact, Guterstam et al. quizzed the volunteers in this study and found that only about 5% explicitly endorsed a belief in extramission. Excluding these believers didn’t change the experimental results.
This study seems fairly solid, although it seems a little fortuitous that the small effect found by the n=157 Experiment 1 was replicated in the much smaller (and hence surely underpowered) follow-up Experiments 2 and 3C. I also think the stats are affected by the old ‘erroneous analysis of interactions’ error – i.e. concluding that two conditions differ because the effect is significant in one but not the other, instead of testing the difference between conditions directly – although I’m not sure if this makes much difference here.
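To illustrate that last point, here is a minimal simulated sketch (not the paper’s data or analysis – the means, spreads and sample sizes are made up) of why “significant in the eyes-visible condition, non-significant in the blindfold condition” is not itself evidence of a difference between conditions; the interaction should be tested directly:

```python
# Hedged sketch with simulated data: testing each condition against zero
# separately vs. testing the difference between conditions directly.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)
n = 80  # hypothetical participants per condition

# Simulated per-participant tilt-angle biases (degrees); values are invented.
eyes_visible = rng.normal(loc=0.67, scale=2.0, size=n)
blindfolded = rng.normal(loc=0.30, scale=2.0, size=n)

# Separate one-sample tests: one may cross p<.05 while the other doesn't...
_, p_visible = stats.ttest_1samp(eyes_visible, 0.0)
_, p_blind = stats.ttest_1samp(blindfolded, 0.0)

# ...but the claim "the effect depends on the eyes being visible" requires
# directly comparing the two conditions (the interaction test):
_, p_interaction = stats.ttest_ind(eyes_visible, blindfolded)

print(f"eyes visible vs 0:            p = {p_visible:.3f}")
print(f"blindfolded vs 0:             p = {p_blind:.3f}")
print(f"visible vs blindfolded:       p = {p_interaction:.3f}")
```

With noisy data, the direct between-condition test can easily come out non-significant even when the two one-sample tests fall on opposite sides of the p<.05 line, which is exactly the trap the ‘erroneous analysis of interactions’ label refers to.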