
You’ve been hoaxed.


The hoax seems harmless enough. A few thousand AI researchers have claimed that computers can read and write literature. They’ve alleged that algorithms can unearth the secret formulas of fiction and film. That Bayesian software can map the plots of memoirs and comic books. That digital brains can pen primitive lyrics1 and short stories—wooden and weird, to be sure, yet evidence that computers are capable of more.

But the hoax is not harmless. If it were possible to build a digital novelist or poetry analyst, then computers would be far more powerful than they are now. They would in fact be the most powerful beings in the history of Earth. Their power would be the power of literature, which although it seems now, in today’s glittering silicon age, to be a rather unimpressive old thing, springs from the same neural root that enables human brains to create, to imagine, to dream up tomorrows. It was the literary fictions of H.G. Wells that sparked Robert Goddard to devise the liquid-fueled rocket, launching the space epoch; and it was poets and playwrights—Homer in The Iliad, Karel Čapek in Rossumovi Univerzální Roboti—who first hatched the notion of a self-propelled metal robot, ushering in the wonder-horror of our modern world of automata.



If computers could do literature, they could invent like Wells and Homer, taking over from sci-fi authors to engineer the next utopia-dystopia. And right now, you probably suspect that computers are on the verge of doing just that: Not too far in the future, maybe in my lifetime even, we’ll have a computer that creates, that imagines, that dreams. You think that because you’ve been duped by the hoax. The hoax, after all, is everywhere: college classrooms, public libraries, quiz games, IBM, Stanford, Oxford, Hollywood. It’s become such a pop-culture truism that Wired enlisted an algorithm, SciFiQ, to craft “the perfect piece of science fiction.”2

Yet despite all this gaudy credentialing, the hoax is a complete cheat, a total scam, a fiction of the grossest kind. Computers can’t grasp the most lucid haiku. Nor can they pen the clumsiest fairytale. Computers cannot read or write literature at all. And they never, never will.

I can prove it to you.


Computers possess brains of unquestionable brilliance, a brilliance that dates to an early spring day in 1937 when a 21-year-old master’s student found himself puzzling over an ungainly contraption that looked like three foosball tables pressed side-to-side in an electrical lab at the Massachusetts Institute of Technology.

The student was Claude Shannon. He’d earned his undergraduate diploma a year earlier from the University of Michigan, where he’d become fascinated with a system of logic devised during the 1850s by George Boole, a self-taught English mathematician who’d managed to vault himself, without a university degree, into a mathematics professorship at Queen’s College, Cork. And eight decades after Boole pulled off that improbable leap, Shannon pulled off another. The ungainly foosball contraption that sprawled before him was a “differential analyzer,” a wheel-and-disc analogue computer that solved physics equations with the help of electrical switchboards. Those switchboards were a convoluted mess of ad hoc cables, switches, and relays that seemed to defy reason. Then Shannon had a world-changing epiphany: Those switchboards and Boole’s logic spoke the same language. Boole’s logic could simplify the switchboards, condensing them into circuits of elegant precision. And the switchboards could then solve all of Boole’s logic puzzles, ushering in history’s first automated logician.


With this jump of insight, the architecture of the modern computer was born. And as the ensuing years have proved, the architecture is one of enormous potency. It can search a trillion webpages, dominate strategy games, and pick lone faces out of a crowd—and every day, it stretches still further, automating more of our vehicles, dating lives, and daily meals. Yet as dazzling as all these tomorrow-works are, the best way to understand the true power of computer thought isn’t to peer forward into the future fast-approaching. It’s to look backward in time, returning our gaze to the original source of Shannon’s epiphany. Just as that epiphany rested on the earlier insights of Boole, so too did Boole’s insights3 rest on a work more ancient still: a scroll authored by the Greek polymath Aristotle in the fourth century B.C.


The scroll’s title is arcane: Prior Analytics. But its purpose is simple: to lay down a method for finding the truth. That method is the syllogism. The syllogism distills all logic down to three basic functions: AND, OR, NOT. And with those functions, the syllogism unerringly distinguishes what’s TRUE from what’s FALSE.

So powerful is Aristotle’s syllogism that it became the uncontested foundation of formal logic throughout Byzantine antiquity, the Arabic middle ages, and the European Enlightenment. When Boole laid the mathematical groundwork for modern computing, he could begin by observing:

The subject of Logic stands almost exclusively associated with the great name of Aristotle. As it was presented to ancient Greece … it has continued to the present day.

This great triumph prompted Boole to declare that Aristotle had identified “the fundamental laws of those operations of the mind by which reasoning is performed.” Inspired by the Greek’s achievement, Boole decided to carry it one step further. He would translate Aristotle’s syllogisms into “the symbolical language of a Calculus,” creating a mathematics that thought like the world’s most rational human.


In 1854, Boole published his mathematics as The Laws of Thought. The Laws converted Aristotle’s FALSE and TRUE into two digits—zero and 1—that could be crunched by AND-OR-NOT algebraic equations. And 83 years later, those equations were given life by Claude Shannon. Shannon discerned that the differential analyzer’s electrical off/on switches could be used to animate Boole’s 0/1 bits. And Shannon also experienced a second, even more remarkable, realization: The same switches could automate Boole’s mathematical syllogisms. One arrangement of off/on switches could calculate AND, and a second could calculate OR, and a third could calculate NOT, Frankensteining an electron-powered thinker into existence.
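To make Shannon’s leap concrete, here is a minimal sketch in Python (my own illustration, not Shannon’s relay diagrams): treat each off/on switch as a bit, build Boole’s three operations out of it, and even an inference rule like “if A then B” reduces to an arrangement of the gates.

```python
# A minimal sketch (not Shannon's relay diagrams): each off/on switch is a
# 0/1 bit, and Boole's three operations are just rules for combining them.

def AND(a, b): return a and b   # circuit closes only if both switches close
def OR(a, b):  return a or b    # circuit closes if either switch closes
def NOT(a):    return not a     # inverter: open becomes closed, closed becomes open

# "If A then B" can be wired as NOT(A) OR B, so even an inference rule
# reduces to an arrangement of the three gates. Tabulate every case:
for A in (False, True):
    for B in (False, True):
        print(A, B, OR(NOT(A), B))
```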

Shannon’s mad-scientist achievement established the blueprint for the computer brain. That brain, in homage to Boole’s arithmetic and Aristotle’s logic, is known now as the Arithmetic Logic Unit, or ALU. Since Shannon’s breakthrough in 1937, the ALU has undergone a legion of upgrades: Its clunky off/on switch arrangements have shrunk into minuscule transistors, been renamed logic gates, been multiplied into parallel processors, and been put to increasingly sophisticated styles of mathematics. But through all these improvements, the ALU’s core design has not changed. It remains as Shannon drew it up: an automated version of the syllogism. Syllogistic reasoning is the only kind of thinking that computers can do; Aristotle’s AND-OR-NOT is hardwired in.

This hardwiring has hardly seemed a limitation. In the late 19th century, the American philosopher C.S. Peirce deduced that AND-OR-NOT could be used to compute the essential truth of anything: “mathematics, ethics, metaphysics, psychology, phonetics, optics, chemistry, comparative anatomy, astronomy, gravitation, thermodynamics, economics, the history of science, whist, men and women, wine, meteorology.” And in our own time, Peirce’s deduction has been bolstered by the advent of machine learning. Machine learning marshals the ALU’s logic gates to perform the most astonishing feats of artificial intelligence, enabling Google’s DeepMind, IBM’s Watson, Apple’s Siri, Baidu’s PaddlePaddle, and Amazon Web Services to reckon a person’s odds of getting sick, alert companies to possible frauds, winnow out spam, become a whiz at multiplayer video games, and estimate the likelihood that you’d like to purchase something you don’t even know exists.

Although these remarkable displays of computer cleverness all originate in the Aristotelian syllogisms that Boole equated with the human mind, it turns out that the logic of their thought is different from the logic that you and I typically use to think.


Very, very different indeed.

The difference was detected back in the 16th century.

It was then that Peter Ramus, a half-blind, 20-something professor at the University of Paris, pointed out an awkward fact that no reputable academic had previously dared to admit: Aristotle’s syllogisms were extremely hard to understand.4 When students first encountered a syllogism, they were inevitably confused by its truth-generating instructions:


If no β is α, then no α is β, for if some α (let us say δ) were β, then β would be α, for δ is β. But if all β is α, then some α is β, for if no α were β, then no β could be α …

And even after students battled through their initial perplexity, valiantly wrapping their minds around Aristotle’s abstruse mathematical procedures, they still needed years to acquire anything like proficiency in Logic.

This, Ramus thundered, was oxymoronic. Logic was, by definition, logical. So, it should be immediately obvious, flashing through our mind like a beam of clearest light. It shouldn’t slow down our thoughts, requiring us to labor, groan, and painstakingly calculate. All that head-strain was proof that Logic was malfunctioning—and needed a fix.

Ramus’ denunciation of Aristotle stunned his fellow professors. And Ramus then startled them further. He announced that the way to make Logic more intuitive was to turn away from the syllogism. And to turn toward literature.



Literature exchanged Aristotle’s AND-OR-NOT for a different logic: the logic of nature. That logic explained why rocks dropped, why heavens rotated, why flowers bloomed, why hearts kindled with courage. And by doing so, it equipped us with a handbook of physical power. Teaching us how to master the things of our world, it upgraded our brains into scientists.

Literature’s facility at this practical logic was why, Ramus declared, God Himself had used myths and parables to convey the workings of the cosmos. And it was why literature remained the fastest way to penetrate the nuts and bolts of life’s operation. What better way to grasp the intricacies of reason than by reading Plato’s Socratic dialogues? What better way to understand the follies of emotion than by reading Aesop’s fable of the sour grapes? What better way to fathom war’s empire than by reading Virgil’s Aeneid? What better way to pierce that mystery of mysteries—love—than by reading the lyrics of Joachim du Bellay?

Inspired by literature’s achievement, Ramus tore up Logic’s traditional textbooks. And to communicate life’s logic in all its rich variety, he crafted a new textbook filled with sonnets and stories. These literary creations explained the previously incomprehensible reasons of lovers, philosophers, fools, and gods—and did so with such graceful intelligence that learning felt easy. Where the syllogisms of Aristotle had ached our brains, literature knew just how to talk so that we’d comprehend, quickening our thoughts to keep pace with its own.


Ramus’ new textbook premiered in the 1540s, and it struck thousands of students as a revelation. For the first time in their lives, those students opened a Logic primer—and felt the flow of their innate method of reasoning, only executed faster and more precisely. Carried by a wave of student enthusiasm, Ramus’ textbooks became bestsellers across Western Europe, inspiring educators from Berlin to London to celebrate literature’s intuitive logic: “Read Homer’s Iliad and that most worthy ornament of our English tongue, the Arcadia of Sir Philip Sidney—and see the true effects of Natural Logic, far different from the Logic dreamed up by some curious heads in obscure schools.”5

Four hundred years before Shannon, here was his dream of a logic-enhancer—and yet the blueprint was radically different. Where Shannon tried to engineer a go-faster human mind with electronics, Ramus did it with literature.

So who was right? Do we make ourselves more logical by using computers? Or by reading poetry? Does our next-gen brain lie in the CPU’s Arithmetic Logic Unit? Or in the fables of our bookshelf?

To our 21st-century eyes, the answer seems obvious: The AND-OR-NOT logic of Aristotle, Boole, and Shannon is the undisputed champion. Computers—and their syllogisms—rule our schools, our offices, our cars, our homes, our everything. Meanwhile, nobody today reads Ramus’ textbook. Nor does anyone see literature as the logic of tomorrow. In fact, quite the opposite: Enrollments in literature classes at universities worldwide are contracting dramatically. Clearly, there is no “natural logic” inside our heads that’s accelerated by the writings of Homer and Maya Angelou.


Except, there is. In a recent plot twist, neuroscience has shown that Ramus got it right.

Our neurons can fire—or not.

This basic on/off function, observed pioneering computer scientist John von Neumann, makes our neurons appear similar—even identical—to computer transistors. Yet transistors and neurons are different in two respects. The first difference was once thought to be very important, but is now viewed as basically irrelevant. The second has been almost entirely overlooked, but is very important indeed.


The first—basically irrelevant—difference is that transistors speak in digital while neurons speak in analogue. Transistors, that is, talk the TRUE/FALSE absolutes of 1 and 0, while neurons can be dialed up to “a tad more than 0” or “exactly ¾.” In computing’s early days, this difference seemed to doom artificial intelligences to cogitate in black-and-white while humans mused in endless shades of gray. But over the past 50 years, the development of Bayesian statistics, fuzzy sets, and other mathematical techniques has allowed computers to mimic the human mental palette, effectively nullifying this first difference between their brains and ours.
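To see how a strictly 0/1 machine can traffic in shades of gray, here is a toy illustration (the numbers are invented for the example): a graded degree of belief stored as a value between 0 and 1 and updated with Bayes’ rule.

```python
# A toy illustration (invented numbers): a degree of belief is neither TRUE (1)
# nor FALSE (0) but a float in between, nudged up or down by Bayes' rule.

def bayes_update(prior, hit_rate, false_alarm_rate):
    """Posterior probability of a hypothesis after seeing one piece of evidence."""
    evidence = hit_rate * prior + false_alarm_rate * (1 - prior)
    return hit_rate * prior / evidence

belief = 0.10                                          # "a tad more than 0"
belief = bayes_update(belief, hit_rate=0.9, false_alarm_rate=0.2)
print(round(belief, 3))                                # ~0.333: a shade of gray
```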

The second—and significant—difference is that neurons can control the direction of our ideas. This control is made possible by the fact that our neurons, as modern neuroscientists and electrophysiologists have demonstrated, fire in a single direction: from dendrite, through the cell body and axon, to synapse. So when a synapse of neuron A opens a connection to a dendrite of neuron Z, the ending of A becomes the beginning of Z, producing the one-way circuit A → Z.

This one-way circuit is our brain thinking: A causes Z. Or to put it technically, it’s our brain performing causal reasoning.



Causal reasoning is the neural root of the tomorrow-dreaming teased at this article’s beginning. It’s our brain’s ability to think: this-leads-to-that. It can be based on some data or no data—or even go against all data. And it’s such an automatic outcome of our neuronal anatomy that from the moment we’re born, we instinctively think in its story sequences, cataloguing the world into mother-leads-to-pleasure and cloud-leads-to-rain and violence-leads-to-pain. Allowing us, as we grow, to invent afternoon plans, personal biographies, scientific hypotheses, business proposals, military tactics, technological blueprints, assembly lines, political campaigns, and other original chains of cause-and-effect.

But as natural as causal reasoning feels to us, computers can’t do it. That’s because the syllogistic thought of the computer ALU is composed of mathematical equations, which (as the term “equation” implies) take the form of A equals Z. And unlike the connections made by our neurons, A equals Z is not a one-way route. It can be reversed without changing its meaning: A equals Z means exactly the same as Z equals A, just as 2 + 2 = 4 means precisely the same as 4 = 2 + 2.
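The asymmetry is easy to see in a few lines of Python (a general illustration, not a model of any particular system): an equality test is blind to order, while a directed edge, like the one-way circuit A → Z, is not.

```python
# Equality reverses freely; a directed cause-and-effect link does not.

print((2 + 2 == 4) == (4 == 2 + 2))   # True: "A equals Z" is the same as "Z equals A"

causal_edges = {("A", "Z")}           # a one-way circuit: A -> Z
print(("A", "Z") in causal_edges)     # True:  A leads to Z
print(("Z", "A") in causal_edges)     # False: Z does not lead to A
```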

This feature of A equals Z means that computers can’t think in A causes Z. The closest they can get is “if-then” statements such as: “If Bob bought this toothpaste, then he will buy that toothbrush.” This can look like causation but it’s only correlation. Bob buying toothpaste doesn’t cause him to buy a toothbrush. What causes Bob to buy a toothbrush is a third factor: wanting clean teeth.

Computers, for all their intelligence, cannot grasp this. Judea Pearl, the computer scientist whose groundbreaking work in AI led to the development of Bayesian networks, has demonstrated that the if-then brains of computers see no meaningful difference between Bob buying a toothbrush because he bought toothpaste and Bob buying a toothbrush because he wants clean teeth. In the language of the ALU’s transistors, the two equate to the very same thing.
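A toy simulation (the probabilities are invented purely for illustration, and this is the spirit of Pearl’s point rather than his calculus) shows why the if-then view falls short: when a hidden desire for clean teeth drives both purchases, toothbrushes reliably follow toothpaste in the data even though neither purchase causes the other.

```python
# A toy simulation (invented probabilities): a hidden common cause makes two
# purchases look linked even though neither one causes the other.

import random
random.seed(0)

paste_buyers = 0
paste_and_brush_buyers = 0
for _ in range(100_000):
    wants_clean_teeth = random.random() < 0.5            # the hidden cause
    buys_toothpaste = wants_clean_teeth and random.random() < 0.9
    buys_toothbrush = wants_clean_teeth and random.random() < 0.9
    if buys_toothpaste:
        paste_buyers += 1
        if buys_toothbrush:
            paste_and_brush_buyers += 1

# "If Bob bought toothpaste, then he will buy a toothbrush" holds about 90%
# of the time, yet the toothpaste never caused the toothbrush.
print(paste_and_brush_buyers / paste_buyers)             # ~0.9
```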


This inability to perform causal reasoning means that computers cannot do all sorts of stuff that our human brain can. They cannot escape the mathematical present-tense of 2 + 2 is 4 to cogitate in was or will be. They cannot think historically or hatch future schemes to do anything, including take over the world.

And they cannot write literature.

The Objections

Let’s take the objections in turn. There are some folks who object to Judea Pearl’s proof that the computer ALU cannot perform causal reasoning. I’m not sure why they object, but Pearl’s work is eminently solid and speaks for itself.

There are some folks who think that I’m arguing that human brains are magical entities that transcend the laws of mechanics. I’m not arguing that. In fact, I treat human brains as machines. What makes human brains special, compared to a computer, is that their nuts-and-bolts can run more than symbolic logic. They can also run narrative. And they can do so not because of free will or emotion or soul or consciousness or imagination, but because of the structure of the neuron, which is more complex than an on/off switch or a transistor and has functions that cannot be reduced to logic gates.

And there are some folks who seem to think that literature is itself a species of symbolic logic. That, after all, is what most of us are taught in school, because of the legacy of interpretive paradigms such as New Criticism. But as modern narrative theory has revealed, literature requires narrative to do most of what it does.

Agreement

There are some folks who point out, correctly, that Pearl is at work on a causal calculus, that is, a calculus that can do causal reasoning. My reply is: The calculus works and is brilliant, but it can’t ever be run on a computer ALU. It requires human brains to perform a number of its core functions.

There are some folks who point out, correctly, that computers are only one possible kind of AI and we can imagine a different kind of AI that could run narrative. I agree with that, but such an AI would require a new kind of technology, a post-computer technology that isn’t confined by the limits of electronics. To invent that technology, we’d need to draw on our powers of causal reasoning. And our current best accelerant for those powers is literature.

The Narrowness of Computer Intelligence

There are some folks who seem to think that I’m saying that “Humans are always going to be special” or “Close the patent office—there’s a limit to what tomorrow’s inventors can invent!” I wouldn’t dream of saying either of those things. I’m saying that literature is one of the most powerful technologies that humans have ever invented, but right now, we’re not taking advantage of that extraordinary technology because of the pervasive misconception that computers are universal thinking machines. Yet in fact, computers can only perform one, narrow form of thinking: symbolic logic. Meanwhile, poems and novels and other literature can boost our intelligence in the much wider thought-domain of causal reasoning.

Don’t Confuse Computer Mechanics with Processing

The computer brain (i.e., the Arithmetic Logic Unit, or ALU) is composed of electrons, gates, and other physical elements that exist in time and so operate in sequences of cause-and-effect. But that brain can only run the AND-OR-NOT procedures of symbolic logic, which exist in the timeless mathematical present and so cannot crunch cause-and-effect.

In other words, while a computer exists in four dimensions, it can only think in three. It’s a product of causal reasoning but cannot itself produce causal reasons.

The Summary

If you want to go beyond magical thinking and genuinely create the future (including post-computer AI technologies that can plan, imagine, and write novels), the way to do so is to accept the permanent limitations of the computer brain—and to then upgrade your own neural powers by better mastering the mechanics of narrative, which you can do by studying literature.


Literature is a wonderwork of weird, dynamic, imaginative variety. But at the bottom of its strange and branching multiplicity is an engine of causal reasoning. The engine we call narrative.

Narrative cranks out chains of this-leads-to-that. Those chains form literature’s story plots and character motives, bringing into being the events of The Iliad and the soliloquies of Hamlet. And those chains also comprise the literary device known as the narrator, which (as narrative theorists from the Chicago School6 onward have shown) generates novelistic style and poetic voice, creating the postmodern flair of “Rashōmon” and the fierce lyricism of I Know Why the Caged Bird Sings.

No matter how nonlogical, irrational, or even madly surreal literature may feel, it hums with narrative logics of cause-and-effect. When Gabriel García Márquez begins One Hundred Years of Solitude with a mind-bending scene of discovering ice, he’s using story to explore the causes of Colombia’s circular history. When William S. Burroughs dishes out delirious syntax in his opioid-memoir Naked Lunch—“his face torn like a broken film of lust and hungers of larval organs stirring”—he’s using style to explore the effects of processing reality through the pistons of a junk-addled mind.

Narrative’s technologies of plot, character, style, and voice are why, as Ramus discerned all those centuries ago, literature can plug into our neurons to accelerate our causal reasonings, empowering Angels in America to propel us into empathy, The Left Hand of Darkness to speed us into imagining alternate worlds, and a single scrap of Nas, “I never sleep, because sleep is the cousin of death,” to catapult us into grasping the anxious mindset of the street.


None of this narrative think-work can be done by computers, because their AND-OR-NOT logic cannot run sequences of cause-and-effect. And that inability is why no computer will ever pen a short story, no matter how many pages of Annie Proulx or O. Henry are fed into its data banks. Nor will a computer ever author an Emmy-winning television series, no matter how many Fleabag scripts its silicon circuits digest.

The best that computers can do is spit out word soups. Those word soups are syllogistically equivalent to literature. But they’re narratively different. As our brains can instantly discern, the verbal emissions of computers have no literary style or poetic voice. They lack coherent plots or psychologically comprehensible characters. They leave our neurons unmoved.

This isn’t to say that AI is dumb; AI’s rigorous circuitry and prodigious data capacity make it far smarter than us at Aristotelian logic. Nor is it to say that we humans possess some metaphysical creative essence—like free will—that computers lack. Our brains are also machines, just ones with a different base mechanism.

But it is to say that there’s a dimension—the narrative dimension of time—that exists beyond the ALU’s mathematical present. And our brains, because of the directional arrow of neuronal transmission, can think in that dimension.


Our thoughts in time aren’t necessarily right, good, or true—in fact, strictly speaking, since time lies outside the syllogism’s timeless purview, none of our this-leads-to-that musings qualify as candidates for rightness, goodness, or truth. They exist forever in the realm of the speculative, the counterfactual, and the fictional. But even so, their temporality allows our mortal brain to do things that the superpowered NOR/NAND gates of computers never will. Things like plan, experiment, and dream.

Things like write the world’s worst novels—and the greatest ones, too.

Angus Fletcher is Professor of Story Science at Ohio State’s Project Narrative and the author of Wonderworks: The 25 Most Powerful Inventions in the History of Literature. His peer-reviewed proof that computers cannot read literature was published in January 2021 in the literary journal Narrative.


Read our interview with Angus Fletcher here.

References

1. Hopkins, J. & Kiela, D. Automatically generating rhythmic verse with neural networks. Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics 168-178 (2017).


2. Marche, S. I enlisted an algorithm to help me write the perfect piece of science fiction. This is our story. Wired (2017).

3. Corcoran, J. Aristotle’s Prior Analytics and Boole’s Laws of Thought. History and Philosophy of Logic 24, 261-288 (2003).

4. Sharratt, P. Nicolaus Nancelius, “Petri Rami Vita.” Humanistica Lovaniensia 24, 161-277 (1975).

5. Fraunce, A. The Lawiers Logike. William How, London, U.K. (1588).


6. Phelan, J. The Chicago School. In Grishakova, M. & Salupere, S. (Eds.) Theoretical Schools and Circles in the Twentieth Century Humanities. Routledge, New York, NY (2015).

