Humanity has long feared the corrupting influence of new technologies. This truism is as ancient as Socrates—as old as writing itself.
“This discovery of yours will create forgetfulness in the learners’ souls, because they will not use their memories,” Socrates warned in Plato’s Phaedrus, written around 370 B.C., in one of the earliest known criticisms of writing—yet the critique survives today only because Plato wrote it down.
More than 2,000 years later, we can easily swap the ancient philosopher’s critique of writing for the collective agita surrounding newfangled technologies that have stoked fears of cognitive decline: the printing press, the television, the internet, social media. Most recently, generative artificial intelligence tools like ChatGPT have rekindled this millennia-old panic.
Technologies we’ve used for decades, such as Google search and navigation apps, have also prompted warnings of disruptions to cognitive processes, such as memory and spatial orientation. But some researchers think generative AI, which has only become widely accessible over the past few years, could cause unprecedented damage in the long run. That’s because our chats with chatbots can feel like genuine conversations with fellow humans whom we trust—and it can be tricky to verify what these tools tell us.
This is particularly concerning because chatbots are increasingly spitting out misinformation, but in the confines of convincing, human-like interactions. AI chatbots “are not just repositories of information; they simulate human conversation, adapt to user inputs, and can provide personalized responses,” according to a 2024 paper published in Frontiers in Psychology. “This dynamic interaction could lead to a different kind of cognitive reliance compared to static information sources.”
The power we give to chatbots is “like the power we give to other people,” Olivia Guest, a computational cognitive scientist at Radboud University in the Netherlands, says. “That’s what’s very scary.”
To get a better grasp on generative AI’s impacts on our brains, small studies over the past few years have collected both quantitative and qualitative data, like brain scans, survey results, and performance on various tasks. Such research has suggested that using chatbots and other generative AI tools may, at least over the short term, worsen problem-solving skills, provoke mental “laziness,” and harm our ability to learn, among other impacts.
Overreliance on chatbots could even degrade skills essential to people’s professions, says Iris van Rooij, a computational cognitive scientist also at Radboud University.
By depending on chatbots to read, write, code, or perform other critical work for us, we lose out on opportunities to practice these essential skills, she says. And as our expertise erodes, it becomes harder to catch chatbot errors that would slip past non-experts—prompting a “downward spiral.”
But it’s important to take recent findings regarding the dangers of chatbot use with a grain of salt, says Sam Gilbert, a cognitive neuroscientist at University College London. He contends that it would be tricky to conduct “proper” controlled experiments to definitively link the regular use of any widely adopted technology like AI chatbots with long-term detrimental effects on our minds—this would require comparing people who have had sustained exposure to specific technologies to those who haven’t used them at all.
Finding an unexposed comparison group for chatbots at this stage would be difficult, and because chatbots have only been widely accessible for a few years, it’s too soon to measure long-term effects. Plus, Gilbert says, it would be “unethical” to deny people access to any technology in a long-term randomized trial.
Gilbert studies the concept of “cognitive offloading,” the process of easing one’s mental strain with the help of external aides, be it a pen and paper or a chatbot. Transferring information from our minds to a screen isn’t necessarily harmful, Gilbert has found in his research, and can even free up space in our brains for other information.
Ultimately, alarmist claims that we’re vulnerable to “digital dementia,” a concept popularized in 2012 that ties overreliance on technology to cognitive decline, are supported by “extremely weak evidence,” Gilbert says, due to the lack of controlled experiments. Meanwhile, some studies that have followed older adults for up to two decades have found that use of digital technologies that preceded chatbots is actually associated with lower risk of cognitive impairment.
Gilbert also cautions that real-time shifts in cognitive activity gleaned from brain scans of subjects actively using generative AI don’t necessarily point to long-term perils; they capture momentary changes during a specific task. “It just tells us about how people are using their brains to approach one particular challenge,” Gilbert says. “We need to be very careful about how we interpret that evidence, and there is certainly no neural evidence that I know of to suggest that technology is harming our overall cognitive skills.”
Still, we should check whether AI tools actually produce better content than our own noggins could when, say, writing an essay or drafting a work proposal, he says. Gilbert recommends taking stock of your mental toolbox—a process known as metacognition, or thinking about how you think. It’s important to get a good handle on your abilities, such as writing skill or memory capacity, before you lean too far into outsourcing certain tasks to a chatbot or any other digital resource. And it goes both ways, Gilbert explains: Someone who’s overconfident in their memory, for example, could forgo digital reminders and forget to take their medication.
“I don’t think people should completely avoid offloading,” Gilbert says. “I do think it’s important to get a good handle on your own abilities without a tool—and the success with that tool—to find out whether it truly helps you.”
Across academic fields, views on AI use more broadly diverge sharply. While some researchers hold that responsible applications of AI tools like chatbots can complement human smarts, both Guest and van Rooij take a different view: They say that chatbots, in their current forms, fail to offer any tangible benefit due to their technical shortcomings and are “actively harmful,” according to Guest. Along with researchers from Europe and the United States, Guest and van Rooij recently urged against “the Uncritical Adoption of ‘AI’ Technologies in Academia.” They wrote:
“We can and should reject that AI output is ‘good enough,’ not only because it is not good, but also because there is inherent value in thinking for ourselves. We cannot all produce poems at the quality of a professional poet, and maybe for a complete novice an LLM output will seem ‘better’ than one’s own attempt. But perhaps that is what being human is: learning something new and sticking with it, even if we do not become world-famous poets.”
Lead image: ollagery / Shutterstock
