When you walk into a room of people, do you instantly catch the vibe? Can you quickly scan faces and catch the hidden meaning behind the shape of a brow or the twitch of a smile, feel the undertow of emotion when a conversation shifts? Or do other people's meanings and intentions frequently elude you?
Not everyone is equally good at picking up social cues in a given environment, a skill colloquially known as reading the room. Recently, scientists from the University of California, Berkeley, and Japan's National Institute of Information and Communications Technology in Osaka set out to understand why.
Across three studies, the researchers found that individual differences in the ability to pick up nonverbal cues stem from idiosyncrasies in the way different people gather, weigh, and integrate facial and contextual information from the environment.
Read more: “How to Tell If You’re a Jerk”
Those who are really good at reading social cues, they found, quickly engage in a complex calculus, assessing the relative clarity or ambiguity of different cues so that they can give more weight to the ones with the most obvious meaning. Those who are less adept at it, however, keep things simple and give equal weight to every piece of information they perceive. The scientists published their findings in Nature Communications.
“We don’t know exactly why these differences occur,” said Jefferson Ortega, a psychology Ph.D. student at the University of California, Berkeley and co-author of the study, in a statement. “But the idea is that some people might use this more simplistic integration strategy because it’s less cognitively demanding, or it could also be due to underlying cognitive deficits.”
To do their experiment, Ortega’s team asked 944 volunteers to guess at the mood of a person in a series of videos, including Hollywood movies, documentaries, and home videos gathered from YouTube. The researchers made the backgrounds in some of the recordings blurry, while others had hazy faces and clear context, in order to isolate the influence of different kinds of information people might use to make their assessments. In a third set of videos, the context and faces were both clear.
Ortega and his colleagues expected that most people would use a method of inference known as Bayesian integration, in which they weigh each cue by how ambiguous it is. But only 70 percent of the participants did this. The other 30 percent simply averaged the cues, no matter how clear or ambiguous they were.
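The difference between the two strategies can be sketched numerically. In the toy example below (an illustration of the general idea, not the authors' actual model or data), each cue, one from a face and one from the surrounding context, is summarized as a mean emotion rating plus an uncertainty. Bayesian integration weights each cue by its precision (inverse variance), so a clear cue dominates a hazy one, while simple averaging treats both cues identically:

```python
# Illustrative sketch (not the study's model): two ways to combine a
# facial cue and a contextual cue, each given as (mean rating, uncertainty).

def bayesian_integration(cues):
    """Precision-weighted average: clearer cues (lower sigma) count more."""
    weights = [1.0 / sigma ** 2 for _, sigma in cues]
    return sum(w * mu for w, (mu, _) in zip(weights, cues)) / sum(weights)

def simple_averaging(cues):
    """Equal-weight average: the reliability of each cue is ignored."""
    return sum(mu for mu, _ in cues) / len(cues)

# Hypothetical ratings on a -1 (negative) to +1 (positive) scale:
# a sharp, clear face reads mildly positive; a blurry background hints negative.
face = (0.6, 0.2)      # clear cue: low uncertainty
context = (-0.4, 0.8)  # hazy cue: high uncertainty

print(round(bayesian_integration([face, context]), 3))  # leans toward the clear face
print(round(simple_averaging([face, context]), 3))      # splits the difference
```

With these made-up numbers, the Bayesian estimate lands close to the clear facial cue, while the averaging strategy is pulled halfway toward the unreliable background, which is roughly the pattern that separated the two groups of participants.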
“It was very surprising,” Ortega said. “The computational mechanisms—the algorithm that the brain uses to do that—is not well understood. That’s where the motivation came for this paper. It’s just an amazing feat.”
Something you can think about next time you have to quickly read the room.
Lead image: Blueastro / Shutterstock
