In 2016, Alan Winfield gave an “IdeasLab” talk at the World Economic Forum about building ethical robots. “Could we build a moral machine?” Winfield asked his audience. Behind him, pictured on a flatscreen TV, was one of the bots Winfield used in his experiments—a short, cutesy, white and blue human-like machine. Just a few years ago, he said, he believed it to be impossible: You couldn’t build a robot capable of acting on the basis of ethical rules. But that was before he realized what you could get robots to do if they had an imagination—or, less grandiosely, a “consequence engine,” a simulated internal model of itself and the world outside.

Winfield showed clips of his experiments at the Bristol Robotics Lab in England. In one, a blue robot saved a red robot from walking into a “danger zone” by (gently) colliding with it. In another, two red robots were heading for danger zones and the blue robot could only save one—an ethical dilemma that endearingly caused it to dither between the two. “The robot behaves ethically not because it chooses to but because it’s programmed to do so,” Winfield said. “We call it an ethical zombie.” Its reasoning was completely transparent. “If something goes wrong, we can replay what the robot was thinking.” Winfield believes this will be crucial for the future. “Autonomous robots will need the equivalent of a flight-data recorder in an aircraft—an ethical black box.” This ethical black box, Winfield believes, would allow us to understand the “what if” questions the robot was asking itself.
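
To make the idea concrete, here is a minimal sketch, in Python, of the kind of loop a consequence engine might run. It is an illustration built on assumptions, not Winfield’s actual code: the toy one-dimensional world, the action names, and the harm score are all invented for the example. The point is only the shape of the mechanism: simulate each candidate action in an internal model, log the predicted outcome (the ethical black box), and act on the least harmful option.

```python
# Hypothetical sketch of a consequence-engine loop (not Winfield's implementation).
# A simulated "other" robot walks toward a danger zone; the ethical robot asks
# "what if?" for each candidate action before choosing one.

DANGER_ZONE = 5  # position of the hazard in a toy 1-D world

def simulate(other_pos, action, steps=5):
    """Internal model: predict where the other robot ends up if we take `action`."""
    block_at = other_pos + 2 if action == "intercept" else None
    pos = other_pos
    for _ in range(steps):
        nxt = pos + 1                          # the other robot keeps walking forward
        if block_at is not None and nxt >= block_at:
            break                              # we predict our body gently blocks it
        pos = nxt
    return {"action": action, "other_in_danger": pos >= DANGER_ZONE}

def harm(outcome):
    return 1 if outcome["other_in_danger"] else 0

def choose_action(other_pos, candidate_actions, black_box):
    scored = []
    for action in candidate_actions:
        outcome = simulate(other_pos, action)  # the "what if" question
        black_box.append(outcome)              # replayable record of the robot's "thinking"
        scored.append((harm(outcome), action))
    return min(scored)[1]                      # least predicted harm wins

black_box = []
print(choose_action(0, ["stay", "intercept"], black_box))  # -> "intercept"
print(black_box)  # the logged episodes can be replayed if something goes wrong
```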

THE ROBOT MEME KING: Alan Winfield sees robots as a metaphorical microscope that allows him to closely study the nature of evolution, culture, intelligence, and imagination. Photo courtesy of Alan F.T. Winfield.

Winfield is a professor of robot ethics at the University of the West of England, in Bristol. He co-founded the Bristol Robotics Lab, the largest such lab in the United Kingdom, and is the author of Robotics: A Very Short Introduction. He specializes in cognitive robotics—how machines can think and imagine. At a Google talk, also in 2016, Winfield explained how robots might achieve their own theory of mind and self-awareness. More recently, with the psychologist Susan Blackmore, author of Consciousness: A Very Short Introduction, Winfield has been taking his work on robot imagination further, into the realm of what we might call robot culture. The researchers are planning experiments with “Storybots,” robots that can tell each other stories. They say it’s a new way of understanding the human capacity for culture, and how our ideas spread and change.

“Storybots experience imperfect communication, so the stories will mutate as they are told and re-told,” Winfield and Blackmore wrote in a 2021 paper, published in the Philosophical Transactions of the Royal Society B. The consequence engine “opens the possibility that we can replay and visualize any episode in a robot’s ‘imagination.’” It allows them to inspect how the robots see their own stories as they pass from robot to robot. The parallels to our culture are startling. “If we introduce new robots into the group at different times, we might see the emergence of an ‘elder’ storyteller robot that is accorded a prestige bias,” Winfield and Blackmore wrote.

In a recent Zoom conversation with Nautilus, Winfield noted he’s a bit of an unusual roboticist. “I’m not really all that interested in robots for their real-world utility,” he said. “I’m a scientist-engineer. For me, a robot is a microscope for studying interesting questions around intelligence, evolution, life, and culture.” It was a joy picking Winfield’s brain about his work and his views on consciousness. Because a major focus of his is robot ethics, I had to ask him what he makes of the way Tesla uses consumers to train its self-driving technology.

Elon Musk has described Tesla as the largest robotics company because their cars are essentially robots on wheels. What do you make of Tesla’s efforts to achieve autonomous driving?

There’s no doubt they make very nice motor cars. I’m much more skeptical about the autopilot technology. We rely on the manufacturers’ assurances that they’re safe. I do quite a lot of work with both the British Standards Institute and the Institute of Electrical and Electronics Engineers Standards Association. The standards have not yet been written for driverless car autopilot. If you don’t have standards, it’s quite hard to test for the safety of such a system. For that reason, I’m very critical of the fact that you can essentially download the autopilot at your own risk. If you’re not paying attention and the autopilot fails, you may, if you’re very unlucky, pay with your life.

Have you spoken with any Tesla owners?

I know several people who have Teslas. Several years ago, I was discussing with one of them how very lucky he was to be paying attention when something happened on the motorway in England and he had to make an evasive maneuver to avoid a serious crash. That’s the paradox of driverless vehicles—insurance companies require drivers to be alert and paying attention, yet the amount of time that they’ve got to react is unreasonably short. Autonomous vehicles only make sense when they are sufficiently advanced and sophisticated that you don’t even have a steering wheel. And that’s a long way into the future. A long way.

How do you view the way Tesla trains its autopilot technology?

They’re using human beings essentially as test subjects as part of their development program. And other road users are, in a sense, unwittingly part of the process. It sounds reasonable in principle, but given the safety implications I think it’s very unwise.

Fair enough. Tell us how you got interested in experimenting with robot culture.

My friend and coauthor Susan Blackmore wrote a book some years ago called The Meme Machine. You’re familiar with the idea of memes. The word “meme” was suggested by Richard Dawkins in his even more famous book called The Selfish Gene, where he defined a meme as a unit of cultural transmission, as a cultural analog, if you like, for the gene. Hence the similarity between the two words. But memes are quite hard to pin down, in the sense that a gene typically has some coding associated with it as part of the DNA. That’s one of the criticisms of memetics. But let’s put those criticisms aside. The fact is that ideas and behaviors spread in human culture and, in fact, in animal culture, by imitation. Imitation is a fundamental mechanism for the spread of behaviors. Humans are by far the best imitators of all the animals that we know. We seem to be born with the ability to imitate as infants. What we are interested in doing is modeling that process of behavioral imitation.

You started by creating what you call copybots. I love that the idea for them was once just a thought experiment Blackmore came up with.

Yes. We were able to build the copybots for real, with physical robots. They’re small, slightly larger than a salt shaker, but they’re sophisticated. Each one has a Linux computer with WiFi. It can see with a camera. It has something like a sense of touch by virtue of a ring of eight infrared sensors. We seeded some of the robots with a dance. The pattern of movement would describe a triangle or a square, and other robots would observe that movement pattern with their own cameras. Imitation was embodied. We don’t allow telepathy between robots, even though it’d be perfectly easy to arrange for that. It’s a process of inference, like watching your dance teacher and trying to imitate their moves.

What is significant about the copybots’ ability to imitate one another?

The fundamentally important part of our work is that the robots, even though they’re in a relatively clean and uncluttered environment, still imitate badly. The fidelity of imitation tends to vary wildly, even in a single experiment. That allows you to see the emergence, the evolution, of new variations on those behaviors. New dances tend to emerge as a result of that less-than-perfect fidelity. The wonderful thing about these real physical robots is that you get the noisy imitation for free.
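
As a rough illustration of what noisy imitation means in practice, here is a small Python sketch. It is purely hypothetical: the move names, error rates, and seeded dance are invented, and real copybots infer movements through a camera rather than by flipping a coin. But it shows how imperfect fidelity alone is enough for new dance variants to appear.

```python
import random

MOVES = ["forward", "left", "right", "pause"]

def observe_and_imitate(dance, p_error=0.2):
    """Embodied imitation is noisy: each move may be missed or misread."""
    copy = []
    for move in dance:
        r = random.random()
        if r < p_error / 2:
            continue                           # move missed entirely
        elif r < p_error:
            copy.append(random.choice(MOVES))  # move misread as something else
        else:
            copy.append(move)                  # move copied faithfully
    return copy

# A seeded "square" dance passed down a chain of imitating robots.
dance = ["forward", "right"] * 4
for generation in range(5):
    dance = observe_and_imitate(dance)
    print(generation, dance)  # variants, in effect new dances, emerge over generations
```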

What are some signs you see that imitation can lead to the emergence of culture?

We see heredity because robot behaviors have parent and grandparent behaviors. You also have selection. If your memory has, say, 10 dances in it, and five of them are very similar to one another while the other five are all quite different, then even if you choose randomly with equal probability you are more likely to pick one of the similar dances, simply because that cluster dominates the memory. So you see the emergence of simple traditions, if you like: a new dance emerges and becomes dominant in the collective memories of all of the robots. That really is evidence of the emergence of artificial traditions—i.e., culture. It’s a demonstration that these very simple robots can model something of profound importance.
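
The selection effect Winfield describes is easy to reproduce in a few lines. The sketch below is a toy illustration under invented assumptions (made-up dance names and a crude notion of “similar”): with five near-identical variants and five unrelated dances in memory, a uniform random pick lands on the dominant family about half the time, which is how a simple tradition can take hold.

```python
import random
from collections import Counter

# Toy memory: five near-identical "square" variants plus five unrelated dances.
memory = [f"square_v{i}" for i in range(5)] + ["zigzag", "spiral", "shuffle", "figure8", "wiggle"]

def family(dance):
    """Group near-identical variants into one tradition (illustrative)."""
    return "square" if dance.startswith("square") else dance

picks = Counter(family(random.choice(memory)) for _ in range(10_000))
print(picks.most_common(3))
# The "square" family is chosen roughly 50% of the time, each other dance ~10%:
# a dominant cluster wins out even under purely random, equal-probability choice.
```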

MAY I HAVE THIS DANCE?: In an arena at the Bristol Robotics Lab, copybots show one another new dances to imitate. Eventually, new dance traditions can emerge as certain movements evolve to be more easily shareable between robots. Their red skirts make it easier for robots to see each other, and their yellow hats allow the tracking system to identify and follow each robot. Photo courtesy of Alan F.T. Winfield and Susan Blackmore.

What else did you find experimenting with copybots?

We found that the memes that emerge over time, the dances, evolve to be easier to transmit. They evolve to match the physiology, the sensorium, of the robots. I believe we’re the first to model cultural evolution with real physical robots.

The storytelling robots you’re working with take imitation and cultural communication to the next level. Can you tell us about that?

It was only recently, in the last couple of years, that Sue Blackmore and I realized that we could extend the story of artificial culture, the work of the copybots, to storybots, which would literally be telling each other stories. That’s the next step. We are very excited by that. We would have had some results if it were not for the pandemic, which closed the lab for the best part of a year or more. The storybots build on another thread of work that I’ve been doing for around five or six years, working on robots with a simulation of themselves inside themselves. It’s technically difficult to do, especially if you want to run the robots in real time and update a robot’s behavior on the basis of what it imagines.

How does robot imagination relate to storytelling?

It is in a sense still the imitation of behavior, but imitation through a much more sophisticated mechanism: you tell me a story and I then repeat that story, but I repeat it after I’ve re-imagined it and reinterpreted it in my own imagination. That’s exactly what happens with storytelling, particularly oral storytelling. If you tell your daughter a story and she tells it back to you, it’s probably going to change. It’s probably going to be a slightly different story. The listener robot will be hearing a speech sequence from another robot with its microphones and then re-imagining that in its own inbuilt functional imagination. But because oral transmission is noisy, we are probably going to get the thing that happens with a game of telephone. Language is an extraordinarily powerful medium of cultural transmission. Being able to model that would really take us a huge step forward.
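
A toy version of that game of telephone can be sketched in Python. Everything here is an assumption made for illustration: real storybots would hear and re-imagine speech through microphones and an internal model, not mutate word lists. The sketch only shows how noisy retelling makes a story drift as it passes along a chain of tellers.

```python
import random

VOCAB = ["robot", "dragon", "forest", "storm", "treasure", "friend", "river", "tower"]

def retell(story, p_swap=0.15, p_drop=0.05):
    """One robot hears a story and re-imagines it before passing it on:
    some words are misheard (swapped), some are forgotten (dropped)."""
    retold = []
    for word in story:
        r = random.random()
        if r < p_drop:
            continue                             # forgotten
        elif r < p_drop + p_swap:
            retold.append(random.choice(VOCAB))  # misheard
        else:
            retold.append(word)                  # passed on faithfully
    return retold

story = ["robot", "finds", "treasure", "in", "the", "forest"]
for teller in range(6):                     # pass the story along six storybots
    story = retell(story)
    print(teller, " ".join(story))          # the story mutates as it is told and re-told
```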

Do you one day want to see humanoid robots having their own culture?

This is purely a science project. I’m not particularly interested in literally making robots that have a culture. This is simply modeling interesting questions about the emergence of culture in animals and humans. I don’t deny that, at some future time, robots might have some emergent culture. You could imagine some future generation of robot anthropologists studying this, trying to make sense of it.

What makes robots such a useful tool in understanding ourselves?

Robots have physical bodies like we do. Robots see the world from their own first-person perspective. And their perception of the world that they find themselves in is flawed, imperfect. So there are a sufficient number of similarities that the model, in a sense, is plausible—providing, of course, you don’t ask questions that are way beyond the capabilities of the robots. Designing experiments, and coming up with research hypotheses that can be reasonably tested, given the limitations of robots, is part of the fun of this work.

Do you think robots can be built with consciousness, or is it something unique to biological beings like us?

Although it’s deeply mysterious and puzzling, I don’t think there’s anything magical about consciousness. I certainly don’t agree with those who think there is some unique stuff required for consciousness to emerge. I’m a materialist. We humans are made of physical stuff and we apparently are conscious, and so are many animals. That’s why I think we should be able to make artificially conscious machines. I’d like to think that the work we’re doing on simulation-based internal models in robots and in artificial theory of mind is a step in the direction of machine consciousness.

Are you worried that we might stumble into creating robots that can suffer, that can feel their own wants and desires are being ignored or thwarted?

I do have those worries. In fact, a German philosopher friend of mine, Thomas Metzinger, has argued that, as responsible roboticists, we should worry. One of the arguments that Thomas makes is that the AI might be suffering without you actually being aware that it’s suffering at all.

AI are moral subjects, but only in the limited sense in which animals are: I don’t believe that animals should suffer. Animal cruelty is something that we should absolutely stop and avoid. For the same reason, if and when we build more sophisticated machines, I think it’s appropriate not to be cruel to them, either.

Do you think that robots and AI will be key in understanding consciousness?

I think they will. You may know the quote from Richard Feynman, who said, if I can’t build it, I don’t understand it. I’m very committed to what’s called the synthetic method, essentially doing science by modeling things. A robot is a fantastic microscope for studying questions in the life sciences.

How would we know that we had built a conscious machine?

I remember asking an old friend of mine, Owen Holland, who I think was one of the people who had the first grant ever in the UK, if not in the world, to investigate machine consciousness, “Well, how will you know if you’ve built it?” His answer was, “Well, we don’t, but we might learn something interesting on the way.” That’s always true.

Brian Gallagher is an associate editor at Nautilus. Follow him on Twitter @bsgallagher.

Lead art: Andrey Suslov / Shutterstock
