If we ever contact extraterrestrials, we’ll have to find a way to understand them. Who are they? What are their intentions? What have they discovered that we haven’t? Olaf Witkowski thinks the only way to begin that dialogue is to try and kill them.
Clearly, there are going to be major differences between us and them. Biological, technological, and cultural gaps are likely to be as wide as interstellar space itself. “The only way to communicate with a creature that is very different from you, and you can make no assumptions at all about how they encode language or meaning, is just killing them,” Witkowski says.
He argues that the only universal basis of communication, the sole feature that all life shares, whatever its form—because it is built into the very definition of life—is that life wants to live. It strives to maintain itself, because if it didn’t, it wouldn’t survive the depredations of the world.
Living entities have to be “replicating or maintaining themselves in a homeostatic loop,” Witkowski says. “Otherwise, they wouldn’t be there.” They will be experts at detecting threats to survival. “So, you try to hurt them. Then they will understand.”
Witkowski hasn’t worked out how threatening ET would open a door to communication rather than slam it firmly shut. In Stanislaw Lem’s final novel, Fiasco, humans (spoiler alert) send a ship to contact aliens on a distant planet and, when they don’t respond to radio messages, attack. That does get the aliens to answer, but the consequences are evident from the book title.
The only universal basis of communication, the sole feature that all life shares, is that life wants to live.
Still, in Witkowski’s scenario, ET’s instinct to survive tells us it’s a form of life, something we share. Perhaps, then, we could turn around and help it survive. “Now we can start from something they value,” Witkowski says. “So they will hear us.” And that could be the beginning of a beautiful friendship.
A soft-spoken researcher on artificial life and intelligence, Witkowski is an unlikely advocate for a warmongering view of interstellar interchange. He is monk-like in his serenity and once considered taking vows. “I even joined some religious communities as a teenager and have sometimes considered a monastic life,” he says.
Born to a Vietnamese mother and Polish father, growing up in Belgium, studying in Spain, now living in Japan, Witkowski speaks six languages fluently and can get by in another six. For his dissertation, he analyzed how communication enables cooperation among AIs or other cognitive systems. Yet despite his linguistic superpowers, Witkowski feels that communication is such a fraught act, presupposing a background of shared knowledge and motivations, that we might scarcely even recognize a message from beyond Earth, let alone decipher it. Humans can often barely communicate among themselves.
Pioneers of the search for extraterrestrial intelligence recognized the challenge, but many assumed that mathematics and physics could serve as a cosmic lingua franca. Our radio signals or laser pulses might tap out a sequence of prime numbers, for example—a prime on Earth is a prime on Alpha Centauri—and build up from there.
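The prime-number handshake can be made concrete. A transmitter need only send groups of pulses whose lengths run through the primes, a pattern no known natural process produces. The encoding below is a minimal sketch of the idea, not any actual SETI protocol; the function names are mine:

```python
def is_prime(n):
    """Trial division: True if n is prime."""
    if n < 2:
        return False
    i = 2
    while i * i <= n:
        if n % i == 0:
            return False
        i += 1
    return True

def beacon(count):
    """Encode the first `count` primes as groups of pulses ('1')
    separated by silences (' '), e.g. 2, 3, 5, 7 becomes
    '11 111 11111 1111111'."""
    primes, n = [], 2
    while len(primes) < count:
        if is_prime(n):
            primes.append(n)
        n += 1
    return " ".join("1" * p for p in primes)
```

A receiver who counts the pulses in each group and notices that every count is prime, in order, can infer an artificial origin without sharing a word of language with the sender.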
In 1966, Carl Sagan wrote about tests of this principle that he and Frank Drake had conducted. Once, he gave a sample message to eminent scientists at a party in Cambridge, Mass., and asked them to figure it out. They couldn’t. (He does not mention whether those scientists ever came back to one of his parties.)
Last year, a trio of mathematicians showed how to recognize such messages as artificial in origin and to reconstruct their basic format. Whether we could make any sense of them is still questionable. Although mathematical truths may be universal, their expression is culturally specific, and even if we manage to translate them, the resulting phrasebook may not help us communicate other ideas.
In a 2014 paper, anthropologist Ben Finney—who has collaborated with SETI scientists to study historical precedents of intercultural contact—wrote that European scholars used to think they could translate ancient Mayan hieroglyphs based on math and astronomy. They didn’t get far. Ultimately, they had to relate the glyphs to modern spoken Maya—in effect, relying on an oral Rosetta Stone. We won’t have that option with extraterrestrials.
Some wonder whether the inherent difficulties of communication explain the Great Silence—the failure, apart from a few tantalizing but equivocal hints, to detect alien signals or Galactic empire-building. Maybe we are in fact surrounded by aliens or their artifacts and don’t recognize them. They might elude us because they think a billion times faster or slower, are tucked into nanometer-level structures, or do not have bodies but exist as diffuse patterns. Commenting on this possibility in the same volume as Finney’s paper, archaeologist Paul Wason noted that we routinely misinterpret human creations as natural phenomena. An untrained eye takes a Paleolithic tool for an ordinary rock.
SETI researchers talk about communicating information, starting with prime numbers, but hopefully leading to a cure for cancer, a unified theory of physics, and all the other wisdom an advanced civilization could offer. But communication is not only, or even primarily, about information. It is about emotion, about establishing our presence and developing or reinforcing a connection. When you ask someone “How are you?” do you honestly care?
The growing literature on science denial counsels us that we can’t change anyone’s mind with facts. We have to establish a bond first. One sad realization I’ve come to in my career as a science writer is that most readers—present company excepted, of course—seek not information, but validation. Even before social media, they judged an article on, say, climate change not by its data or arguments, but on whether they agreed with it. If they did, we were duly scientific; if they didn’t, we were hopelessly politicized.
In 2014, philosopher Tomislav Janović argued that extraterrestrial communication, too, will be affective. “The intention is to simply reveal our presence as intentional beings,” he wrote. “For it is much more likely that they will be able to empathically recognize such an intention than to interpret a signal embodying an explicit representational content.”
To be sure, even the presence and structure of a message, whether or not it is ever decoded, will provide some information. It would certainly quiet the biologists who think intelligent life is such an evolutionary fluke on Earth that it will be vanishingly rare in the galaxy. It would indicate that intelligent life is not self-sabotaging, dispelling the fatalism that is all too easy to feel these days. And over time it might well blossom into an information-bearing channel.
It will behoove us to create an emotional bond with the first super-intelligent alien species we encounter.
Some even suggest that conscious experience is a form of affective self-communication, grounded in how we process our bodily states, which we experience as emotional states. Neuroscientists Antonio Damasio, Mark Solms, and Anil Seth have described consciousness as a bodily self-assessment as opposed to a cognitive function. We evolved it to survive in a fickle environment.
“The germ of consciousness and feeling comes from giving a damn about yourself in this world,” neuroscientist Kingson Man told me. “That is ultimately the dividing line between living and nonliving.”
Damasio and Man suggest that physical vulnerability is also the missing ingredient for artificial general intelligence. In 2023, they and Hartmut Neven at Google created a neural network that can recognize handwritten digits—a standard machine-learning test case—with a twist: performing the task affected the network’s own ability to keep performing it. It was like computational beer pong: If you lose a point, you drink and make it more likely you’ll lose the next point. The network rose to the occasion. It learned not only to perform the task, but to adapt more quickly than a regular network when the researchers changed the rules.
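The beer-pong dynamic can be illustrated with a toy far simpler than the 2023 digit-recognition experiment. In the sketch below (the setup and names are mine, not the paper’s), a perceptron learns the AND function, and in the “vulnerable” condition every mistake also perturbs one of the learner’s own weights, so errors damage the very machinery that must correct them:

```python
import random

def train(vulnerable, epochs=50, seed=0):
    """Train a perceptron on AND. If `vulnerable`, each mistake
    also corrupts a random weight, coupling task failure to
    self-damage. Returns final accuracy on the four cases."""
    rng = random.Random(seed)
    data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
    w, b, lr = [0.0, 0.0], 0.0, 0.1
    for _ in range(epochs):
        for x, y in data:
            pred = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = y - pred
            if err != 0 and vulnerable:
                # "You drink": the error degrades the learner itself,
                # making the next error more likely.
                w[rng.randrange(2)] += rng.uniform(-0.5, 0.5)
            # Standard perceptron update.
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return sum(1 for x, y in data
               if (1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) == y) / 4
```

The invulnerable learner converges easily; the vulnerable one must, in effect, learn around its own fragility—a crude stand-in for the self-maintenance that Damasio and Man argue underlies feeling.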
Damasio, Man, and other authors have also suggested that vulnerability would help with the AI alignment problem. If the machine is vulnerable—so that it needs to devote resources to maintaining its own functioning—it may recognize that humans are vulnerable, too, which is the basis for empathy and an impetus to achieve mutually desirable outcomes. Such a machine will be less likely to launch a first strike against us, they argue. It is a fair bet that advanced AIs will be the first super-intelligent alien species we encounter, so it behooves us to create an emotional bond with them.
SETI researchers commonly assume that extraterrestrials will be as advanced morally as they are technologically, if only because they would have wiped themselves out if they weren’t. So we have no need to fear them. Besides, Earth doesn’t have a lot to offer that couldn’t be obtained more easily and plentifully elsewhere in the solar system or galaxy; expending vast quantities of energy to cross interstellar space in search of energy sources seems perverse. And if rapacious civilizations were out there, we should have already been invaded.
Others think we shouldn’t be so sanguine. If the old Darwinian logic holds, aggressors shall inherit the galaxy. Even the most enlightened alien civilization will have aggressive factions. Earth might well have some resources they want, such as the products of life itself. As in the Predator film series, they may seek out conflict for its own sake.
But Witkowski sees a third possibility. Maybe the extraterrestrials are trying to understand us. By invading Earth and trying to kill people, the invaders may be saying, “We just want to talk.”
Lead image: Tasnuva Elahi; with images by Sky vectors and kaiwut niponkaew / Shutterstock