British author and technology pioneer Kevin Ashton has been puzzling over the nature of storytelling for the past 25 years. That’s how long it took him to research and write his latest book, The Story of Stories: The Million-Year History of a Uniquely Human Art.
The first seed of the book for Ashton lay in two seemingly contradictory questions posed by American philosopher and linguist Noam Chomsky. The first, known as Plato’s problem, asks how we can know so much with so little information. Babies, for instance, learn to speak based on what might seem like a poverty of inputs. The second question is known as Orwell’s problem, and it asks the opposite: How could we know so little, given that so much information is available to us?
Ashton—best known for coining the term “The Internet of Things” in 1999, to describe the rise of a whole economy of sensors and other objects connected to the World Wide Web—also began asking himself how the rise of the smartphone might transform the human relationship to storytelling and to the world. “By the mid-2010s, I could be pretty confident that by 2026, some 9 out of 10 people in the world would have a smartphone, and I wanted to know what that might mean,” he recently told me. “The smartphone was an incremental step in the developed world, but in the developing world, it was everything at once.” In the developing world, most people had skipped over radio, television, and personal computers entirely.
Ashton knew a revolution was coming. But to grasp what that revolution would look like required him to go back and understand the entire evolution of storytelling across human history—which was initially just a footnote in his research.
I recently spoke with Ashton about why cell phones are so revolutionary in the long history of storytelling technologies, why social media might not be as terrible for young people as some believe, why long-form narratives aren’t dead, and why he’s still hopeful about our newest storytelling technologies.
You divide The Story of Stories into two parts: the first act, which is a million years long and comes to its end with the smartphone, and then everything after that. What is so fundamentally different about the smartphone from earlier storytelling technology?
A lot of people are like, “New technology comes along, and kids can’t understand stories anymore. Kids can’t read, nobody talks, bad things happen, words change, and nobody’s got any attention.” And that didn’t stand up to research very well. But what I did realize was that each of these major new technologies changes the scale of storytelling: how many people can tell stories, and how many people they can tell stories to. That started to look really interesting. I was beginning to realize that big new storytelling technology generally leads to big new revolutions.
Of course, one of the early ones is printing. We didn’t all read happily ever after because of printing. There were like 50 or so wars between Protestants and Catholics over whose story was right, and 12 million people were killed. That’s an example of the kind of revolution that happens when new stories become more broadly available. The smartphone really feels like the end of that arc, because now anybody can tell a story to anybody. There is someone in Mongolia right now using Facebook, and if they publish something viral enough and interesting enough that catches enough attention, it’s five shares away from being something everybody sees.
You write in the book that storytelling is uniquely human. Do we know for sure that other species don’t tell stories?
You don’t really see any symbolic behavior in other species. All species communicate, but very few species communicate through visual means. Crows do a little bit of pointing. Dogs can understand humans pointing. But wolves don’t use pointing in the wild. They will mark the ground and use urine for signaling behavior, most of which is olfactory. But what you don’t get is any rigid system where a scratch like this means one thing, or a scratch like that means another thing. And vocalizations are primarily calls and cries that convey warning or attraction. A lot of the information in those sounds is how big is the animal making the call or the cry? How old or young is the animal making the call or the cry? So there’s nothing remotely like storytelling or story comprehension in any species that we’ve ever studied or discovered.
Humans started telling stories when we sat around the fires. We were primates who wanted to socialize. We couldn’t see gestures. We started making sounds. The sounds we had were, “Look over there,” and “Oh my god, run.” And those sounds were actually very useful sitting around the fire. What you want to talk about around the fire is stuff that’s not there. Maybe it’s about tomorrow or yesterday or something you remember, or something you imagine or something you desire. Over a long period of time, hundreds of thousands of years, those sounds start to evolve into something which becomes language. And the reason they evolved into language was so that we could have these conversations about things not present, which is storytelling.
You argue that a fundamental purpose of stories is to distribute glory and shame, in the form of heroes and villains. But literary critics might argue that good stories don’t have clear-cut heroes and villains. They have antiheroes. They have gray areas rather than certainties.
We have to distinguish between stories that tend to be long-lasting and successful when told to large audiences—and ones that are not. In successful stories, the antiheroes are still heroes. Batman still saves Gotham City. He just does it wearing black. An antihero isn’t a villain. And there are no anti-villains. The antihero exists as a reaction to the heroic archetype, the pure goody-two-shoes heroes that were in earlier stories. The tweedy literary people in their Brooklyn brownstones who try to write stories where it’s very ambiguous who’s the good guy or the bad guy—it’s all a bit muddled, but there’s still someone you’re supposed to be rooting for. There’s still someone the author identifies with. You cannot tell a story that anyone will enjoy if there’s absolutely nobody doing anything virtuous at any stage. That wouldn’t be a compelling story. But really, the more emotion a story evokes, the better the story. Different things evoke different emotions in different people. But these more experimental white guy books that everyone pretends they read where nothing ever happens …
Like which ones?
I’m not going to name any names! But if you’re not evoking an emotion, you aren’t going to find a lot of readers. A lot of people who want to be high-art storytellers will experiment: “Well, what if they take out these elements? What am I left with? How does it work?” My answer is generally it’s an intellectually interesting exercise that I don’t want to return to. Depending on what kind of mood I’m in, I sometimes have some very salty conversations with literary critics.
If storytelling has been so utterly transformed by these new technologies, why do the earliest forms of storytelling stick around? People are constantly saying poetry is dead, novels are dead, but they aren’t dead. They don’t go away even though we keep getting new storytelling technologies. Why do you think that is?
The real deep answer is we’re exactly the same people with exactly the same brains and behaviors that we were 100,000 years ago or more when storytelling first evolved. The things that appeal to us about stories today are the things that appealed to our ancestors. That hasn’t changed. The hard-wiring is the same. And more people can read than ever before. More novels are being sold than ever before.
I’ve been talking about this a long time because I get really tired of this old post-literate world thing. Marshall McLuhan was declaring the world post-literate when only 40 percent of people could read. Give me a break. We live in a world right now where there’s been a democratization of reading, an egalitarianism of reading. People who like romance and fantasy books are writing their own romance and fantasy books and they’re self-publishing them. And some of them get the attention of traditional publishers and become very successful.
I’m not generally very welcome on panel discussions, but you get, “The kids these days, they have no attention spans.” And: “The kids these days, they’re always looking at their phones.” And I’m like, “Well, hang on a minute. Both of those things can’t be true.” Either they have no attention or they can’t stop looking at their phones, by which you mean paying a lot of attention to their phones. What’s on their phones is words, most of the time. Even if you go look at some dumb TikTok video, they put words on top of things. There are captions that help it make more sense when they’re communicating with one another. They’re sending text messages. Children today are writing more words than you or I did when we were teenagers.
The other day I was talking to an educator, and they asked, “What do you think about AI? It’s writing all the essays.” My reply is, “I think you should stop assigning people essays.” Why has nobody come up with this idea? Tell the students, “I want you to do the reading, and then you and I are going to sit down for five minutes, one-on-one, and we’re going to talk about it.” That solves the whole freaking problem.
But if our brains haven’t changed since we first started writing down and consuming stories, wouldn’t it be a good thing to continue to write essays? Evidence suggests writing is such an important part of the thinking process.
Writing is just a technology of story. It’s one of the earliest technologies of story. And older people always hold the things that they did when they were kids in higher regard. I’m a writer. I write books. I love writing. I can talk for days about why writing is good and why books are good, but are they better than everything else? That’s an unchallenged assumption based on the fact that it’s old and not based on the fact that it’s better.
The standard academic essay is an example of what Paulo Freire called banking education. The teacher deposits a question; the student retrieves content, formats it per conventions, returns it for grading. The product is assessed, not the thinking that was supposed to happen in the middle. What the essay actually measures is socioeconomic class and family income. Essay content and style correlate more strongly with household income than even SAT scores. Higher-income students deploy abstract reflection, complex syntax, and so on, not because they think more clearly, but because those conventions are part of their linguistic inheritance. Lower-income students write differently, not worse, but get marked down. And here’s the kicker: Rich kids have always been able to pay tutors, writing coaches, and consultants to help them write essays. AI has simply made that service free and universal. The scandal isn’t that students aren’t writing their own essays. The scandal is that we’re only worrying about the problem now that the cheat is available to everyone.
What about long-form versus very short-form storytelling? Can a 5-second post on a social media app really sustain attention or require you to think about ideas in the way that a novel or a nonfiction book would?
You can get equally enthralled by a short story and a 10-book series. Martin Luther’s 95 Theses was this one-page document. The first viral meme broke the world’s greatest power at the time—the Roman Catholic Church—in two. It really isn’t how you say it, it’s what you say. If you’re going to write long-form, you have to do it well. If you’re going to write short-form, you have to do it well. All of that stuff seems values-neutral to me.
But also, social media content isn’t always short-form. A teenager spending three hours on social media might be watching long-form YouTube essays, reading Reddit threads, participating in BookTok, or creating content. Collapsing all of that into a single variable and drawing conclusions about format isn’t justified. The most popular YouTube creators built massive audiences on long-form content. PewDiePie—110 million subscribers, nearly 30 billion total views—averages 28 minutes per video, more than double the platform average. Penguinz0, who has 17.5 million subscribers and 12 billion views, averages 27 to 60 minutes per video depending on measurement window. The generation supposedly incapable of sustained attention built two of YouTube’s largest channels on content running 30 to 60 minutes per video.
And long-form reading is booming. U.S. young adult print sales went from approximately 23 million copies in 2018, when TikTok launched, to a record 35 million in 2022, a 52-percent increase. Sales in 2024 remain 31 percent above 2018 levels. The primary driver of that growth, according to Circana BookScan, was TikTok. Those roughly 30 million annual copies average about 70,000 words each: approximately 2 trillion words of long-form reading per year in a single book category, from a generation supposedly incapable of sustained attention. That’s about the same number of words per capita as any other age group. Americans aged 11 to 18 read about one novel a year on average. So do Americans over 19.
What about recent studies that suggest kids’ social media use is linked to lower memory, vocabulary, and reading scores?
The claim that social media is measurably harming cognition isn’t supported by the evidence. The one genuinely controlled experimental result is a 2023 study, which found that TikTok degraded prospective memory—the ability to remember to execute a planned intention—in a between-subjects design, while Twitter, YouTube, and a no-activity control did not. This is a real finding. But it measures one narrow cognitive function under artificial lab conditions, not, say, reading, vocabulary, critical thinking, or abstract reasoning.
Assessments like reading scores don’t measure things like narrative construction, persuasive communication, editing judgment, or audience awareness, all of which content creation develops. Participation matters. TikTok follows the 90-9-1 pattern common to all interactive media: 1 percent create, 9 percent interact, and the rest read, watch, or whatever. But on a platform with 150 million U.S. users, even 1 percent is 1.5 million American content producers. And the 9 percent who comment, stitch, and duet are doing something cognitively active.
Research from University of Oxford experimental psychologists Amy Orben and Andrew Przybylski suggests technology use explains only around 0.4 percent of variation in adolescent well-being. The concern about bedtime screens, often treated as established fact, wasn’t supported when measured properly. Cognitive psychologist Lan Nguyen and colleagues reviewed data from some 100,000 participants and found a moderate correlation between short-form video and poorer attentional performance, but the causal direction isn’t proven: Children with pre-existing attention difficulties may gravitate toward high-stimulation short-form content, producing the observed correlation without any platform effect.
You write that critical literacy—the ability to look at the context of a story, to ask follow-up questions, to recognize that everybody tells you something with an agenda—is the only way to protect yourself from manipulation today. Is anyone successfully teaching critical literacy?
The way I conclude the book is, “No one is coming to save us.” We ourselves have to get more humble, more experienced, recognize our own cognitive biases, recognize when we’re mad about something because we forgot to eat breakfast, and actually understand that we see the world in stories. People often think, “What he’s saying to me is, ‘I’m already a good critical thinker, but I’ve gotta help the other people.’” But no, I’m saying “I, Kevin, have to get better at it. And you, Kristen, have to get better at it.” One of my favorite cognitive biases is bias blindness: People who know there are cognitive biases, but are absolutely convinced these biases don’t apply to them.
It seems like you’re hopeful, though, that this new era of storytelling can bring about progress of some kind.
It already has. I have a nice little chart that I show when I talk about the book. Even today, about 2 to 3 percent of the Silent Generation will identify as gay, lesbian, bisexual, or trans. It’s about the same for the Boomer generation, and it’s a little bit more for Generation X. But for millennials, it’s about 15 percent, and for Gen Z, it’s about 25 percent. A lot of that has roots in the Internet becoming a place where people could find one another and build community and learn to come out. You see supportive groups forming that allow people to be themselves.
The trans revolution, a historic movement that we’re now living through, is in many ways a result of the Internet and digital photography allowing people to tell their stories more loudly and more clearly than they could before. And a lot of the horrible things in the world are backlash against that. We look at this horrible Epstein situation and it’s all terrible, but the fact of the matter is that in the 1950s, that just would’ve been no big deal. We see a lot of progress. Particularly right now, we can rightly and reasonably get very focused on the backlash to the progress, but they can’t reverse it all the way.
I can absolutely guarantee you that the Supreme Court will not reinstate the miscegenation laws that prevented Black and white people from getting married, laws that were struck down in the late 1960s, because Clarence Thomas is a Black man married to a white woman. There are a lot of horrible, bloody, brutal things that happen because we made progress. And some of them push us back a little way, but they never push us back all the way.
Lead image: Ethan653 / Shutterstock






