
Many of us have been taught that pronouncing vowels indistinctly and dropping consonants are symptoms of slovenly speech, if not outright disregard for the English language. The Irish playwright St. John Ervine viewed such habits as evidence that some speakers are “weaklings too languid and emasculated to speak their noble language with any vigor.” If that’s so, then we are swimming in a sea of linguistic wimpiness; the linguist Keith Johnson found that speakers relaxed or dropped sounds in more than 60 percent of words spoken in conversation. Happily, the science of mumbling offers a far less judgmental—and more captivating—account of our imperfectly crisp pronunciations.

Far from being a symptom of linguistic indifference or moral decay, dropping or reducing sounds displays an underlying logic similar to the data-compression schemes that are used to create MP3s and JPEGs. These algorithms trim down the space needed to digitally store sounds and images by throwing out information that is redundant or doesn’t add much to our perceptual experience—for example, tossing out data at sound frequencies we can’t hear, or not bothering to encode slight gradations of color that are hard to see. The idea is to keep only the information that has the greatest impact.
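To make the analogy concrete, here is a minimal sketch in Python of the core move behind lossy codecs like MP3: transform a signal into the frequency domain, discard the components that contribute least, and reconstruct from what remains. The sample rate, tones, noise level, and keep-the-top-1-percent rule are all invented for this illustration; real codecs rely on detailed psychoacoustic models rather than a simple magnitude cutoff.

```python
import numpy as np

# A toy, MP3-flavored demonstration: move a signal into the frequency
# domain, keep only the strongest components, and resynthesize.
# All parameters here are assumptions made up for the sketch.

rate = 8000                          # samples per second (assumed)
t = np.arange(rate) / rate           # one second of "audio"
rng = np.random.default_rng(0)
signal = (np.sin(2 * np.pi * 440 * t)            # a strong 440 Hz tone
          + 0.05 * np.sin(2 * np.pi * 3100 * t)  # a faint high overtone
          + 0.01 * rng.standard_normal(rate))    # low-level noise

spectrum = np.fft.rfft(signal)

# "Compression": zero out all but the strongest 1% of frequency components.
cutoff = np.quantile(np.abs(spectrum), 0.99)
kept = np.where(np.abs(spectrum) >= cutoff, spectrum, 0)

reconstructed = np.fft.irfft(kept, n=rate)

print(f"components kept: {np.count_nonzero(kept)} of {len(spectrum)}")
print(f"RMS error after reconstruction: "
      f"{np.sqrt(np.mean((signal - reconstructed) ** 2)):.4f}")
```

Nearly everything is thrown away, yet the reconstruction differs from the original only by roughly the noise floor: the bargain every lossy codec strikes.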


Mumbling—or phonetic reduction, as language scientists prefer to call it—appears to follow a similar strategy. Not all words are equally likely to be reduced. In speech, you’re more likely to reduce common words like fine than uncommon words like tine. You’re also more likely to reduce words if they’re predictable in the context, so that the word fine would be pronounced less distinctly in a sentence like “You’re going to be just fine” than “The last word in this sentence is fine.” This suggests that speakers, at a purely unconscious level, strategically preserve information when it’s needed, but often leave it out when it doesn’t offer much communicative payoff. Speaking is an effortful, cognitively expensive activity, and by streamlining where they can, speakers may ultimately produce better-designed, more fluent sentences.
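In information-theoretic terms, a word’s predictability in context is its surprisal: the less probable the word, the more bits it carries, and the more carefully it needs to be articulated. Here is a toy sketch of that calculation; the bigram counts are invented, standing in for what would be estimated from a large speech corpus.

```python
import math

# Invented bigram counts standing in for real corpus statistics.
bigram_counts = {
    ("just", "fine"): 90,   # "fine" is very likely after "just"
    ("just", "tine"): 1,
    ("is", "fine"): 10,
    ("is", "tine"): 1,
}

context_totals = {}
for (prev, _word), n in bigram_counts.items():
    context_totals[prev] = context_totals.get(prev, 0) + n

def surprisal(prev, word):
    """Bits of information carried by `word` given the preceding word."""
    p = bigram_counts[(prev, word)] / context_totals[prev]
    return -math.log2(p)

# A predictable word carries few bits and invites reduction; an
# unpredictable one carries many bits and earns a careful pronunciation.
for prev, word in [("just", "fine"), ("is", "tine")]:
    print(f"surprisal of '{word}' after '{prev}': "
          f"{surprisal(prev, word):.2f} bits")
```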

This kind of linguistic data compression is not limited to pronunciation: It also drives decisions about whether to utter or omit certain words. You’re far more likely to specify that your neighbor is a female police officer or a male nurse than you would be if the genders were reversed. Since, historically, most police officers have been male and most nurses female, gender is fairly predictable in the usual case; precious cognitive energy is reserved for the anomalous cases, where the words male and female are more useful.
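The same logic can be written down as a rule for when a modifier earns its keep: mention an attribute only when it is surprising given the noun. The probabilities and the one-bit threshold below are assumptions made up for the demo, not measurements.

```python
import math

# Assumed probabilities that the person described by each noun is female.
p_female_given = {"nurse": 0.90, "police officer": 0.15}

def bits(p):
    """Information, in bits, conveyed by an event of probability p."""
    return -math.log2(p)

# Arbitrary cutoff for this demo: spend a word only if it buys > 1 bit.
THRESHOLD = 1.0

for noun, p_female in p_female_given.items():
    for gender, p in [("female", p_female), ("male", 1 - p_female)]:
        info = bits(p)
        verdict = "worth saying" if info > THRESHOLD else "safe to omit"
        print(f"'{gender} {noun}': {info:.2f} bits -> {verdict}")
```

Under these made-up numbers, “male nurse” and “female police officer” clear the threshold while the expected combinations do not, reproducing the asymmetry described above.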



The notion of strategic laziness, in which effort and informational value are judiciously balanced against each other, scales up beyond individual speakers to entire languages, helping to explain why they have certain properties. For example, it offers some insight into why languages tolerate massive amounts of ambiguity in their vocabularies: Speakers can recycle easy-to-pronounce words and phrases to take on multiple meanings, in situations where listeners can easily recover the speaker’s intent. It has also been invoked to explain the fact that across languages, the most common words tend to be short, carrying minimal amounts of phonetic information, and to account for why languages adopt certain word orders.
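The claim about short common words, often called Zipf’s law of abbreviation, is easy to check in miniature. The frequency figures below are rough invented values, just to show the comparison one would run on a real corpus.

```python
# Invented frequency figures for a handful of English words.
word_freq = {
    "the": 1_000_000, "of": 800_000, "and": 700_000,
    "a": 650_000, "to": 600_000, "in": 500_000,
    "perpendicular": 300, "encyclopedia": 400, "serendipity": 200,
    "hippopotamus": 150, "cartography": 100, "onomatopoeia": 50,
}

def mean_length(words):
    return sum(len(w) for w in words) / len(words)

common = [w for w, f in word_freq.items() if f > 10_000]
rare = [w for w, f in word_freq.items() if f <= 10_000]

print(f"common words average {mean_length(common):.1f} letters")
print(f"rare words average   {mean_length(rare):.1f} letters")
```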

You can also see strategic data compression in action by inspecting color vocabularies across languages. Some languages make do with just three or four distinct words for color; for example, the Lele language, spoken by tens of thousands of people in Chad, uses a single word to encompass yellow, green, and blue. Languages with minimalist color vocabularies tend to be spoken in pre-industrial societies, where there are very few manufactured objects to which color has been artificially applied. This means that speakers mostly refer to natural objects, for which color is highly predictable, just as gender has traditionally been for nurses or police officers. If you think back to the last time you asked someone to go out and cut the green grass or buy you some yellow bananas, it becomes easier to see how a language might get by without an abundant menu of color words—especially in an area without a profusion of consumer products.

While there are many reasons to believe language involves a great deal of data compression without catastrophic loss of meaning, scientists still know very little about how speakers figure out exactly what information they can afford to leave out, and when. The data-compression algorithms used to create MP3 files are based on scores of psychoacoustic experiments that probed the fine points of human auditory perception. Do speakers have implicit theories about what information is most essential to their listeners? If so, what do these theories look like, and how do speakers arrive at them? And what to make of the fact that people do sometimes mumble unintelligibly, throwing out either too much information or the wrong kind? (Also see Aatish Bhatia’s earlier post, “The Math Trick Behind MP3s, JPEGs, and Homer Simpson’s Face.”)


We also don’t know how well speakers tune their data-compression algorithms to the needs of individual listeners. Accurately predicting the information that a listener can easily recover sometimes requires knowing a lot about his previous experience or knowledge. After all, one person’s redundancy can be another person’s anomaly, as was made clear by an exchange I once had with a fellow plane passenger. We were departing the city of Calgary, next to the Canadian Rockies. My companion, who was heading home to Florida, told me that he’d had a lovely vacation with his family, spending several days snow skiing in the mountains. To my Canadian ears, this sounded odd—doesn’t skiing usually involve snow? I asked if he would ever just use the term skiing. Well yes, he explained patiently. But then, that would be on the water.

Julie Sedivy teaches linguistics and psychology at the University of Calgary, and trades information on Twitter @soldonlanguage.
