Ever catch yourself saying “uhhh” too many times? Many people vow to cut back on such verbal crutches once they notice them, but they’re not just filler. It seems they act as a cue of sorts for your conversational partners. Researchers from the Max Planck Institute for Psycholinguistics found that listeners actively track when a speaker says “uh” to help predict what kind of word might follow.
The Power of Uh
Based on previous research, psycholinguist Hans Rutger Bosker and his team already knew that people sprinkle their speech with so-called disfluencies, the “ums,” “ahs,” “uhs” and pauses that we often unwittingly slip into conversation. They also knew that these disfluencies usually cropped up before someone said a word that wasn’t in their everyday vernacular.
But to find out if listeners actually paid attention to disfluencies, Bosker’s team set up an experiment that utilized eye-tracking technology. On a computer screen, participants saw two images: one common, like a hand, and one more uncommon, like an igloo. While they were looking at the screen, the volunteers listened to two different kinds of talkers: one who uttered a disfluency before an uncommon word, as you’d expect, and another so-called atypical talker who um-ed and ah-ed before a common word.
Depending on which kind of talker they were listening to, the participants adjusted their expectations. If they were following along with a typical talker, the eye-tracking tech caught listeners’ eyes darting to the uncommon image, like the igloo, right after they heard a disfluency. And after listening to an atypical talker for a bit, participants eventually started flicking their eyes toward the common item on the screen, like the hand, after they heard the verbal hesitation.
“We take this as evidence that listeners actively keep track of when and where talkers say ‘uh’ in spoken communication, adjusting what they predict will come next for different talkers,” says Bosker in a press release.
Uh With a Twist
Taking their experiment a step further, the researchers tested whether this phenomenon held up when the person talking had a foreign accent. If the accented speaker was a typical one — using an “um” or an “uh” before saying an uncommon word — listeners eventually adjusted, glancing at the word’s corresponding image on the screen. However, if the speaker was atypical, hesitating before a common word, listeners never adjusted.
“This probably indicates that hearing a few atypical … instructions led listeners to infer that the non-native speaker had difficulty naming even simple words,” says Geertje van Bergen, one of the paper’s co-authors, in the press release. So listeners likely took the odd “ums” and “ahs” as unreliable cues for what kind of word might be coming up next.
Together, these findings, published today in the Journal of Memory and Language, are the first evidence of what researchers call distributional learning in this kind of setting. “We’ve known about disfluencies triggering prediction for more than 10 years now,” says Bosker in the press release. “But we demonstrate that these predictive strategies are malleable. People actively track when particular talkers say ‘uh’ on a moment-by-moment basis, adjusting their predictions about what will come next.”