Mattel’s AI nanny, called Aristotle, recently gained the notorious distinction of being the subject of a bipartisan protest in the US Congress, as well as a petition against it with over 15,000 signatures. The Campaign for a Commercial-Free Childhood, which organized the petition, argued that Aristotle is a consumerist ploy that “attempts to replace the care, judgment and companionship of loving family members with faux nurturing and conversation from a robot designed to sell products and build brand loyalty.”
Aristotle, designed to interact with kids, was based on the same technologies as virtual assistants such as Amazon’s Alexa. It could teach children lessons and calm them. The Guardian suggested that it might “reinforce good manners in kids, and even help kids learn a foreign language.” But it could also watch and listen to them. Aristotle’s Bluetooth camera, embedded in a cylindrical speaker-body, was poised to gaze unblinkingly in novel territory: a child’s bedroom. The company eventually conceded that Aristotle “did not fully align with Mattel’s new technology strategy.” Aristotle soon became Socrates, as it were, forced to drink hemlock at the hands of an enraged mob.
Yet canning Aristotle has at least one unfortunate consequence: We close off one channel of learning how kids grow up interacting with an AI as it becomes increasingly integrated in both schools and homes. The fact is that children interact with AI quite differently than adults do. Recently, the MIT Media Lab began studying how children between the ages of three and ten interact with AI in a monitored setting. This included Amazon Alexa and Google Home, digital assistants in cylindrical speakers; Julie Chatbot, an Android phone app that tells jokes and plays games with its interlocutor; and Cozmo, a wheeled robot that recognizes faces while showing artificial emotions on its own face.
Last year at a conference at Stanford University, the researchers began sharing some interesting results. For instance, children seem to overestimate the intelligence of the AIs they interact with. Surprisingly, most of the older children thought the AIs might be smarter than themselves, whereas the children younger than six weren’t as sure. The researchers also found that children judge the tone and prosody of an AI’s synthesized speech differently than adults do. One of the researchers, Randi Williams, discovered that children seem to prefer Alexa’s voice for its “energy,” while an adult like Williams might prefer Google Home’s voice for sounding more human-like. “In future work,” the researchers concluded, “we hope to design interactions where children are able to tinker with and program the agents and thus expand their perception of their own intelligence and different ways to develop it.”
But it may be negligent, given how impressionable children are, to leave them alone, unsupervised, with an AI. A recent study found that children ages seven to nine are more likely than adults to conform to the behavior of Nao, a small humanoid robot. The researchers—from the UK, Germany, and Belgium—found that children often fall in line with the judgments of robots tasked with matching one of several lines to a reference line of the same length, even when the robots’ judgments are clearly wrong. Another study found that robots can “influence children to change their judgments about moral transgressions,” like whether it is okay to tease or hit other kids. Without a supervising adult, children might learn inappropriate behaviors from AI, which is why Stefania Druga, a graduate student in the Personal Robots Group at MIT Media Lab, told The Globe and Mail that parents are crucial intermediaries between children and AI.
Parents can also rely on AIs as distractions from distractions. For example, Amy Blake, an Ontario parent of two young children, has seen a reduction in her children’s screen time thanks to Google Home. The virtual assistant plays music and reads stories like Rapunzel to her children, replacing hours of time that might otherwise be squandered staring at what a TED Talk called the “nightmare videos of children’s YouTube.”
Williams wants more from child-oriented AI—a personalized tutor, for starters. In June, on a Future of Life Institute podcast, she said, “We found educational research that says that your vocabulary at the age of five, is a direct predictor of your PSAT score in the 11th grade. And as we all know, your PSAT score is a predictor of your SAT score. Your SAT score is a predictor of your future income, and potential in life, and all these great things.” AI can not only help expose children to a richer vocabulary in engaging conversations, but also encourage curiosity. “We think about how the personality of the robot is shaping the child as a learner. So, how is the robot teaching the child to have a growth mindset, and teaching them to persevere, to continue learning better? Those are the kinds of things that we want to instill, and AI can do that.”
Joel Frohlich is a postdoctoral researcher studying consciousness in the laboratory of Martin Monti at the University of California, Los Angeles. He received his PhD in neuroscience in the laboratory of Shafali Jeste at UCLA while studying biomarkers of neurodevelopmental disorders. He is the editor in chief of the science-communication website “Knowing Neurons.”