When you walk into a room of people, do you instantly catch the vibe? Can you quickly scan faces and pick up the hidden meaning behind the shape of a brow or the twitch of a smile, or feel the undertow of emotion when a conversation shifts? Or do other people’s meanings and intentions frequently elude you?

Not everyone is equally good at picking up social cues in a given environment, a skill colloquially known as reading the room. Recently, scientists from the University of California, Berkeley, and Japan’s National Institute of Information and Communications Technology in Osaka set out to understand why.

Across three studies, the researchers found that individual differences in the ability to pick up nonverbal cues stem from idiosyncrasies in the way different people gather, weigh, and integrate facial and contextual information from the environment.

Those who are really good at reading social cues, they found, quickly engage in a complex calculus, assessing the relative clarity or ambiguity of different cues so that they can give more weight to the ones with the most obvious meaning. Those who are less adept, however, keep it simple and give equal weight to every piece of information they perceive. The scientists published their findings in Nature Communications.

“We don’t know exactly why these differences occur,” said Jefferson Ortega, a psychology Ph.D. student at the University of California, Berkeley, and a co-author of the study, in a statement. “But the idea is that some people might use this more simplistic integration strategy because it’s less cognitively demanding, or it could also be due to underlying cognitive deficits.”

For their experiment, Ortega’s team asked 944 volunteers to guess the mood of a person in a series of videos, including Hollywood movies, documentaries, and home videos gathered from YouTube. In some of the recordings, the researchers blurred the background while leaving the face clear; in others, the face was hazy and the context clear, in order to isolate the influence of the different kinds of information people might use to make their assessments. In a third set of videos, both the faces and the context were clear.

Ortega and his colleagues expected that most people would use a method of inference known as Bayesian integration, in which each cue is weighted according to how clear or ambiguous it is. But only 70 percent of the participants did this. The other 30 percent simply averaged the cues, no matter how clear or ambiguous they were.
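
To make the distinction concrete, here is a minimal sketch, not the study’s actual model, of how a precision-weighted (Bayesian) combination of a facial cue and a contextual cue differs from simple averaging. The valence scale, the two example cues, and their uncertainty values are illustrative assumptions.

```python
# Illustrative sketch only: precision-weighted (Bayesian) cue integration
# vs. simple averaging. Not the study's actual model; all numbers are
# made-up assumptions for demonstration.

def bayesian_integration(cues):
    """Combine cue estimates weighted by their precision (1 / variance).

    Each cue is (estimated_valence, variance); an ambiguous cue has a
    high variance and therefore receives a small weight.
    """
    weights = [1.0 / var for _, var in cues]
    weighted_sum = sum(w * value for w, (value, _) in zip(weights, cues))
    return weighted_sum / sum(weights)

def simple_averaging(cues):
    """Combine cue estimates with equal weight, ignoring ambiguity."""
    return sum(value for value, _ in cues) / len(cues)

# Hypothetical scene: the face is blurry (ambiguous, variance 4.0) but the
# context is clear (variance 0.5). Valence runs from -1 (negative) to +1
# (positive).
face_cue = (0.2, 4.0)      # weak, unreliable hint of positive emotion
context_cue = (-0.8, 0.5)  # clearly negative surroundings

print("Bayesian:", round(bayesian_integration([face_cue, context_cue]), 2))
print("Average: ", round(simple_averaging([face_cue, context_cue]), 2))
# Bayesian integration leans on the clearer context cue (about -0.69),
# while simple averaging just splits the difference (-0.3).
```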

“It was very surprising,” Ortega said. “The computational mechanisms—the algorithm that the brain uses to do that—are not well understood. That’s where the motivation came for this paper. It’s just an amazing feat.”

Something you can think about next time you have to quickly read the room.

Lead image: Blueastro / Shutterstock
