Can a robot be creative? Advances in cloud robotics—machines connected to supercomputers in the cloud—have given self-driving cars, surgical robots, and other “smart” devices tremendous powers of computation. But can a robot, even one supercharged with artificial intelligence, be creative? Will a mechanical Picasso paint among us?
Ken Goldberg is the ideal person to ask. For one thing, when he was getting his Ph.D. in computer science at Carnegie Mellon University, Goldberg built a robot that painted. For another, Goldberg, 53, is a computer engineer, roboticist, and artist himself. He grew up in Bethlehem, Pennsylvania, where he forged his creative path. “I was an outsider, at odds with what other kids were doing, and became very interested in art,” he says.
Today Goldberg is Professor of Industrial Engineering and Operations Research at the University of California, Berkeley, where he also directs a lab on automation sciences, a center for medical robots, an initiative on data and democracy, and a center for new media. He’s published more than 150 peer-reviewed papers on topics such as automation algorithms, and his artwork has been exhibited at the Pompidou Center, Whitney Biennial, and Berkeley Art Museum.
Goldberg has strong views on creativity and how it differs in computers and people. His energy and intellect are infectious as his mind races from one idea to another. Our conversation ranged over his own projects and heroes, from gothic literature to Google Glass, Freud to philosopher Hubert Dreyfus. We spoke at his UC Berkeley lab and at a restaurant in Mill Valley, California, near his home, where he lives with his wife, Tiffany Shlain, a filmmaker and the founder of the Webby Awards, and their two daughters, Odessa and Blooma.
What’s been your most creative moment in science?
I spent a summer in graduate school trying to find the mathematical proof of completeness for an algorithm I had written to orient polygonal objects. I lived alone and every day I would write out ideas. To keep my sanity I made paintings of the rickety old stone stairs in the alley outside of my apartment. I woke up one morning and realized I could prove it using a step function. It was a true Aha! moment. The proof has been cited over 400 times.
Einstein talked about how the greatest scientists are also artists. For him, all great achievements in science must start from intuitive knowledge.
Agreed. Intuition is a hunch, sensing there’s an opportunity—how to set up the problem. As an artist it’s finding the right idea or concept. Making an opera about Klinghoffer, for example—that’s not an obvious subject for composer John Adams to have come up with. In both science and art, one must rely on a gut feeling about which direction to go.
Tell us about your painting robot.
I liked the idea of a robot being able to demonstrate that a machine can go through the motions of painting but can’t capture the eloquence, the subtlety, the nuance of a human painter. What also fascinated me was how people responded to the robot: the performative aspect of a moving machine was hypnotic to them.
Sounds like that laid the groundwork for your “Telegarden.” Tell us about that.
In 1993 I was teaching at USC. My students came to me and showed me this amazing thing called the World Wide Web. We sat around brainstorming about what we could contribute. Since we were working in robotics and had robots in the lab we thought, “Why don’t we connect a robot to this Web and let people control it from anywhere in the world?” We got super excited about the idea of having a robot do something that was ironic. We wanted to have it tend a garden. A garden is interesting because in some way it’s the last thing you expect a robot to be doing. I loved the juxtaposition of the natural and the digital worlds.
The installation had a planter with 18 inches of soil and an industrial robot arm in the center, controlled through a Web interface. Over 100,000 people visited. From a Web page they could view the garden and participate by helping to water it, and if they watered for a certain amount of time, they would get their first seed to plant. Interesting social dynamics emerged. People would ask others to watch their plants when they were away, which created a sense of community. Yet at the same time there was something incongruous about people sitting at their computers planting seeds.
And this led you to coin the term “telepistemology.”
Right. One afternoon I got an email out of the blue from a student in Texas who asked how he could be sure the garden was real. I was surprised because it seemed obvious. Then I realized it was a very deep question because many hoaxes had been done on the Web, and it’s not too hard to imagine the whole thing could have been faked. We wondered, how would we prove that it was real?
Around that time I came to Berkeley and met with the philosopher Hubert Dreyfus, who is one of my heroes. He told me this is a very old question at the heart of epistemology: What is knowledge? What do we know? How do we know it? We would meet every two weeks to wrestle with this question, and we realized it was interesting because technology had always had an influence on epistemology. When Galileo developed the telescope, and when the microscope was developed, it caused a radical shift in thinking that led Descartes to the sense of doubt that became the basis for the scientific method.
What were Dreyfus’s views on AI and creativity?
He’s always been a critic of artificial intelligence at any level. He was vocal about this in the 1960s, when everybody was predicting that computer systems were going to be intelligent within the decade. He was one of the lone dissenters. He wasn’t a computer scientist but a philosopher saying, “No, you’re missing that a fundamental aspect of intelligence is experience, and that requires embodiment.” He knew that to understand the world you needed to be inside it, experiencing its behaviors and its responses to you. Well, he was right. We may be making progress on things like recognizing a cat in a photograph. But there’s a huge gulf between that and doing something creative.
In 1968, Marvin Minsky said, “Within a generation we will have intelligent computers like HAL in the film 2001.” What made him and other early AI proponents think machines would think like humans?
Even before Moore’s law there was the idea that computers are going to get faster and their clumsy behavior is going to get a thousand times better. It’s what Ray Kurzweil now claims. He says, “OK, we’re moving up this curve in terms of the number of neurons, number of processing units, so by this projection we’re going to be at super-human levels of intelligence.” But that’s deceptive. It’s a fallacy. Just adding more speed or neurons or processing units doesn’t mean you end up with a smarter or more capable system. What you need are new algorithms, new ways of understanding a problem. In the area of creativity, it’s not at all clear that a faster computer is going to get you there. You’re just going to come up with more bad, bland, boring things. That ability to distinguish, to filter out what’s interesting, that’s still elusive.
Today’s computers, though, can generate an awful lot of connections in split seconds.
But generating is fairly easy and testing is pretty hard. In Robert Altman’s movie The Player, they try to combine two movies to make a better one. You can imagine a computer that just takes all movie titles and tries every combination of pairs, like Reservoir Dogs meets Casablanca. I could write that program right now on my laptop and just let it run. It would instantly generate all possible combinations of movies, and there would be some good ones. But recognizing them, that’s the hard part.
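The pitch generator Goldberg describes really would fit in a few lines. A minimal sketch in Python, with a handful of stand-in titles in place of every movie ever made, might look like this:

```python
from itertools import combinations

# A few stand-in titles; a real run would load every movie title ever released.
titles = ["Reservoir Dogs", "Casablanca", "The Player", "Out of Africa", "Pretty Woman"]

# Print every unordered pairing as a pitch, e.g. "Reservoir Dogs meets Casablanca."
for a, b in combinations(titles, 2):
    print(f"{a} meets {b}")
```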
That’s the part you need humans for.
Right, the Tim Robbins movie exec character says, “I listen to stories and decide if they’ll make good movies or not.” The great majority of combinations won’t work, but every once in a while there’s one that is both new and interesting. In early AI it seemed like the testing was going to be easy. But we haven’t been able to figure out the filtering.
Can’t you write a creativity algorithm?
If you want to do variations on a theme, like Thomas Kinkade, sure. Take our movie machine. Let’s say there have been 10,000 movies—that’s 10,000 squared, or 100 million combinations of pairs of movies. We can build a classifier that would look at lots of pairs of successful movies and do some kind of inference on it so that it could learn what would be successful again. But it would be looking for patterns that already exist. It wouldn’t be able to find that new thing that was totally out of left field. That’s what I think of as creativity—somebody comes up with something really new and clever.
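The classifier Goldberg imagines could be mocked up with off-the-shelf machine-learning tools. The sketch below is only an illustration of the idea; the pitch strings and success labels are invented placeholders, not real box-office data:

```python
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression

# Invented placeholder data: each pitch is a pairing, labeled 1 if it "worked."
pitches = [
    "Reservoir Dogs meets Casablanca",
    "Out of Africa meets Pretty Woman",
    "The Player meets Jaws",
    "Casablanca meets Jaws",
]
labels = [1, 1, 0, 0]

# Represent each pitch by the titles it contains (bag of words).
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(pitches)

# Learn which combinations resemble pairings that succeeded before.
clf = LogisticRegression()
clf.fit(X, labels)

# Score a new pairing; the model can only echo patterns already in the data,
# which is exactly the limitation Goldberg points to.
new_pitch = vectorizer.transform(["Reservoir Dogs meets Pretty Woman"])
print(clf.predict_proba(new_pitch)[0, 1])
```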
Can you think of an example?
One simple and beautiful example is a performance piece by Emma Sulkowicz, an art student at Columbia University, who said she was raped. She carried her mattress around campus for weeks. It was astoundingly elegant and simple and utterly original. Could a robot do that sort of thing? No. But—and this is my sentimental side—I don’t want it to happen. I want that to be something that we keep for ourselves as humans.
You and your wife made a short film about the “uncanny.” What is the uncanny?
It’s one manifestation of a broad theory that Freud wrote about in his 1919 essay, “The Uncanny.” We have a deeply unsettled reaction to something when it’s on the borderline between alive and dead, or real and fictional. It goes back to gothic literature and questions about the line between man and machine. It arises out of the study of the automaton and thinking of the human body as a mechanism. That motivated automata makers to build systems that looked increasingly human-like and to blur that line. It led to anxiety: What is real? What is human? What is alive versus what isn’t? You have a whole literature built around vampires and zombies and Frankenstein. It’s really at the root of all horror stories, and it’s very operative in the realm of robotics. If you make a robot that’s too much like a human, it becomes profoundly repulsive. So if you’re trying to create something you would want in your home, you don’t want it to look too human-like. This wonderful cultural phenomenon is still very relevant today; we can actively trigger the uncanny, and it provokes a very visceral, real response.
How is it triggered today?
By something like Botox. When someone has too much Botox, there’s a creepiness to them. They’ve shifted their humanness a little too far and you start to feel anxiety. Google Glass triggers this. It creates this anxiety and ambiguity about whether the person wearing it is completely human or half-android. It also triggers the fear of surveillance. So you might say that Google Glass is the uncanny “double whammy.” It’s about simulacra and surveillance. It combines two aspects that people really don’t seem comfortable with.
Have you ever taken the Turing Test?
One reason why I got involved with robotics and AI was that when I was about 9, my cousins sat me down at a computer at the Lawrence Berkeley Lab and said, “Go ahead, talk to it.” It was ELIZA, which I now know has about one page of code. I didn’t think it was human but I was intrigued by it. Part of its genius, the way it maintained the illusion of an intelligent conversation, was that it kept throwing back questions. “Why do you think so?” “Can you tell me more?” But as a friend of mine said, it’s not so much the computer passing the Turing Test as the human failing it. We do that all the time. We think we’re talking to something intelligent, but really we’re just not paying attention.
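That question-deflecting strategy is simple to imitate. The Python toy below is not Weizenbaum’s actual ELIZA script, just a sketch of how reflecting a user’s words back as questions can keep a conversation going:

```python
import random

# Fallback questions in the spirit of ELIZA's prompts.
FALLBACKS = ["Why do you think so?", "Can you tell me more?", "How does that make you feel?"]

def reply(user_input: str) -> str:
    text = user_input.lower().strip()
    if text.startswith("i feel"):
        # Reflect the user's own words back as a question.
        return f"Why do you feel{user_input.strip()[6:]}?"
    if "because" in text:
        return "Is that the real reason?"
    return random.choice(FALLBACKS)

print(reply("I feel like no one is listening"))   # Why do you feel like no one is listening?
print(reply("Because robots never get tired"))    # Is that the real reason?
```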
Could you imagine having a robot companion?
I never did until I saw Robot & Frank with Frank Langella. The Frank character was so real I found myself identifying with him. I imagined myself in his position: home alone and needing help or company—with a robot showing me stuff, telling me jokes. It’s not inconceivable that a robot could generate topics by mining my email and photographs and travelogues and suggesting a discussion about treehouses because it found a childhood photo of one and bookmarked articles on the topic.
You know how kids need to be educated about social media? Will we also need robot education? If it moves like a human and talks like a human, won’t people get confused?
That’s interesting. But we actually have a good inherent test about this. When my girls play with Siri they have fun but they know it’s not a real being.
This is a dark scenario. But what if a child were raised in an orphanage by robot nannies?
Right, it’s one thing to have a robot caretaker for an old codger like me, but when you turn that around and say the robot is taking care of my 3-year-old, it seems like there is something really wrong. But we park our kids in front of the TV to watch Sesame Street. We all do this. If you had something that could interact with them, to teach them a language or something like that, it could be interesting.
Do sophisticated robots challenge humans to be more aware of qualities that are unique to us?
If you have a robot caretaker six days a week then you will probably very quickly appreciate the human on the seventh day in a whole new way. I think robots help us appreciate human qualities that robots lack. I don’t want to say forever, but for as long as I can imagine there is going to be a sizeable gap.
What has working with robots taught you about being human?
It has taught me a huge appreciation for the nuances and inconsistencies of human behavior. There are so many aspects of human unpredictability that we don’t have a model for. When you watch a ballet or a dance, or see a great athlete, and realize what amazing abilities are on display, you start to appreciate those things that are uniquely human. The ability to have an emotional response, to be compelling, to be able to pick up on subtle emotional signals from others, those are all things that we haven’t made any progress on with robots.
What’s the most creative thing a robot has done?
One of my favorites is by the engineer and media artist Raffaello D’Andrea. He worked with a sculptor and they designed a chair that would suddenly collapse. There was a pause and then all the pieces would start to move, find each other, and reassemble into a chair. There’s something very elegant about this idea of a chair that’s designed to fall apart and come back together on its own. It brings up all these whimsical ideas of magic and yet it’s a beautiful and very complex machine.
Can robots help humans be more creative?
That happens every day. All the new tools for making movies and making music have been enormously beneficial for creativity. And computers and robots are relieving us of tedious tasks like handling documents and filing. That allows us to spend more of our time being creative. Think of the time you would spend doing research for a book, going to the library, digging up information, walking through the stacks and finding out that the perfect book is gone, somebody else has taken it. Now you have access to all this information at your fingertips. Dozens of times when I thought I had a new idea, I would go on the Internet and probe around only to find that somebody else had exactly the same idea and had already done it. I save myself all the trouble of trying to be creative with something that somebody has already done. And that frees me up to spend my creative energy on something else.
Jeanne Carstensen is a writer in San Francisco. Her work has appeared in The New York Times, Salon, Modern Farmer, and other publications.