When it comes to having a robot around the house, humans don't seem willing to settle for the simple solution: a mechanized box on wheels. Instead, we appear to be obsessed with having the device resemble us as much as possible. And not just in looks but in the way it talks, walks, thinks, and feels. Just how far should we take this? How human should a humanoid robot be? Recently, Discover, in partnership with The Disney Institute, assembled an extraordinary panel of robotics researchers to grapple with the issue.
Charles Petit
Moderator. Senior writer at U.S. News & World Report. Petit, who previously wrote for the San Francisco Chronicle, is a former president of the National Association of Science Writers. He has a degree in astronomy from the University of California at Berkeley.
Eric C. Haseltine
Senior vice president of research and development at Walt Disney Imagineering, where he oversees all technology projects and the Virtual Reality Studio. He also dreams up Discover's monthly NeuroQuest column.
Joe Engelberger
Founder and chairman of HelpMate Robotics, which produces a successful courier robot that runs around hospitals, delivering meals and medicine. Engelberger is working on a servant-companion for elderly people.
Kazuhiko Kawamura
Heads the Intelligent Robotics Laboratory at Vanderbilt University. He and his colleagues are working on a variety of service robots, including a humanoid called ISAC that can shake hands, spoon out soup, and play the theremin.
Sebastian Thrun
Codirects the Robot Learning Laboratory at Carnegie Mellon University. He has developed a series of interactive robots, including MINERVA, which has led thousands of visitors through a Smithsonian museum.
Marvin Minsky
Cofounder of MIT's Artificial Intelligence Laboratory, which developed some of the first intelligence-based robots. Minsky is the author of the landmark book on human intelligence The Society of Mind and coauthor of the sci-fi novel The Turing Option.
John McCarthy
Professor of computer science at Stanford University. He coined the term artificial intelligence in 1955 and cofounded MIT's Artificial Intelligence Laboratory. He also created Lisp, the preeminent computer programming language for artificial intelligence.
Joe Herkert
Assistant professor of multidisciplinary studies at North Carolina State University. Herkert assesses the long-range implications of advances in technology. He also edited a new IEEE book on the social and ethical aspects of engineering.
Alan C. Schultz
Head of the Intelligent Systems Section of the Naval Research Laboratory in Washington, D.C. Schultz has focused on research in evolutionary robotics, learning in robotic systems, and adaptive systems.
CHARLES PETIT: The first thing I'd like to come up with is a definition of exactly what a "humanoid" robot is. What are we talking about here, Eric?
ERIC HASELTINE: The first thing, of course, that comes to mind is that the robot looks like a human. But it's important in my view to broaden the definition. Does it behave like a human? If you look at the cost of a humanoid robot, with all the mechanical components and the size and everything, it may well be that the first affordable humanoid robots won't necessarily look like humans but will still behave and relate and emote like humans.
CHARLES PETIT: Joe, you've been making things called robots for a long time. What do you think?
JOE ENGELBERGER: Well, I'm keen on a humanoid robot, but I don't think it has to be a Stepford wife. The appearance doesn't mean very much to me. If you get the behavior that can support our lifestyle and put it in terms of master-slave, then you've done the job, and it's an attainable job in the near future. I'm not sure that there's any real merit to a humanoid robot that does anything more than live like us, deal with our tools, deal with our data, and serve us like a slave. We can anthropomorphize it with our imaginations.
CHARLES PETIT: Kaz, you make robots, service robots. Are they something that can be called humanoid yet?
KAZUHIKO KAWAMURA: Well, I have some problems with the word humanoid. Before that, I have a problem with the word robot. A robot to me is some kind of intelligent machine we don't have yet. As for humanoid, that's something that exists just in our imagination. The problem is that people tend to think immediately in terms of Robocop or the Terminator.
CHARLES PETIT: Sebastian, you've built a robot that leads tours through a museum. Can you explain why you put a head on that thing?
SEBASTIAN THRUN: I like to distinguish between two types of robots. One is the humanoid kind like those we see in Star Trek--robots that are indistinguishable from people by physical appearance. The second kind is what could be called humbots, that is, robots that intentionally use certain features of people but that are very clearly robots. The first type is interesting to pursue, but there really isn't a need to do so right now. We don't exactly have a shortage of humans on earth, and humans are still the one thing that unskilled labor can produce very well. The second type is where I think the opportunities lie. We've found that the interface is incredibly important in robots. If you build a robot that people have a short-term interaction with, you'd better make it connect with things people are familiar with.
CHARLES PETIT: We have predictions that in 30 years there will be robots that walk and talk like people and have emotions. They'll take your kids to the soccer game, tell you when to get up in the morning, and help pick out groceries. They'll even sit down and have a talk with you about how you're getting along with your wife or husband. But if we don't even understand how our own brains work, how will we be able to build something that intelligent? What do you think, Marvin? Is such a future possible?
MARVIN MINSKY: Oh, I'm sure it's possible, although one can't predict how long it will take. And we won't need to know all about how brains work, because there may be simpler ways. But it seems to me--and I disagree with most of the other people on this panel--that building mechanical robots that look like people so that they evoke emotional reactions is just a waste of time. It hasn't led to any improvement in knowledge about how to do the important things that would make machines really smart. The key problem is how to imbue computers with what John McCarthy and I call "commonsense reasoning." There's no robot "alive" today that knows general things, like if you let go of something, it will fall. No robot knows that you can pull something with a string but you can't push it with a string. Little things like that. The average 5-year-old knows a few hundred thousand of those things and an adult a few million. Today there are only a handful of people working on commonsense reasoning, so you can't say how long it will take before robots are truly smart.
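To make the scale of Minsky's point concrete: each commonsense fact has to be represented somehow, and a hand-built store of them runs dry almost immediately. The fragment below is a toy sketch in Python, with invented facts and function names; it is not drawn from any real commonsense-reasoning system.

    # Toy sketch: a few commonsense facts stated as explicit rules.
    # All facts and names here are invented for illustration.
    FACTS = {
        ("let_go_of", "object"): "it falls",
        ("pull", "string"): "the object on the end moves toward you",
        ("push", "string"): "nothing happens; a string cannot transmit a push",
    }

    def predict(action, thing):
        # Look up the consequence, or admit ignorance -- the common
        # case for any hand-built knowledge base.
        return FACTS.get((action, thing), "unknown: not in the knowledge base")

    print(predict("let_go_of", "object"))  # it falls
    print(predict("push", "string"))       # a string cannot transmit a push
    print(predict("stack", "cup"))         # unknown: not in the knowledge base

A 5-year-old's few hundred thousand such facts would mean a few hundred thousand entries just to look up, before any actual reasoning happens.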
CHARLES PETIT: John, we were talking about interfaces and whether robots ought to be made into companions and almost members of society. How do you feel about that?
JOHN McCARTHY: Well, I agree with Marvin substantially. It's probably easier to program in emotions than intelligence. Allow me to sharpen up a couple of points that have been made so far. Imagine that we have robot servants that will do housework, and imagine a child who is born into a family with robot servants. In my opinion, the robots need to be designed so that they will be regarded as appliances rather than as people. We don't want robots that people will hate or fall in love with or anything like that. We have enough trouble treating other people decently without inventing a new oppressed minority. Also, we are not yet within development range of a general-purpose household servant. There are conceptual problems to be solved. In a certain sense, a large part of the field of artificial intelligence is following the wrong tracks, and I'm sure I'm following some of them.
JOE ENGELBERGER: I want to reinforce John's statement that these robots should be regarded as appliances, but I disagree profoundly with the idea that we cannot yet develop a household servant-robot. The artificial intelligence research community doesn't realize how powerful our existing technology is. I consider a household robot to be an appliance--one that cooks and cleans, offers an arm, handles security, fetches and carries, does my bidding in response to natural language, and carries out the kind of conversation that an 85-year-old person does. We can do that right now.
SEBASTIAN THRUN: We are certainly in the position to build a mechanical aide, but I am not quite convinced that we can build the brains necessary for such a robot to carry out such a wide range of tasks in environments as diverse as private homes. In a way, I agree with John. It's easier to connect to people emotionally than make robots truly smart.
CHARLES PETIT: Joe Herkert, we've got a little bit of an argument going here. Is there also an ethical dimension to having machines look and act like people?
JOE HERKERT: Yes, I think so. We're very good at the how of technology, but we're not as good at the why. I'm particularly troubled by suggestions that we ought to be making robots that are so humanlike, both in terms of intelligence and emotions, that they become indistinguishable from humans. In the near term, such robots make it more likely that people will shun contact with other humans. We see this already--and I'm as guilty of this as anyone else--with our computers and TVs. Technology, while it presents possibilities for contacting others, also presents possibilities for isolating us from them. If I have a robot as a best friend, then I am less likely to have a person as a best friend.
MARVIN MINSKY: So what?
JOE HERKERT: Well, I would much rather have a person as a best friend.
MARVIN MINSKY: Yeah, but why? You're not speaking as a person but as an ethicist. Suppose that the robot had all of the virtues of people and was smarter and understood things better. Then why would you want to prefer those grubby old people?
JOE HERKERT: Well, for one, I doubt that the robot would have the virtues.
MARVIN MINSKY: You can doubt it, but what's your basis? Now you're not speaking as an ethicist but as a skeptic. Suppose that robots will have all the virtues of people and more. Then where do you stand?
JOE HERKERT: Well, that's an assumption I can't accept. I can't accept that a robot would have ethics, for example. Only people can have ethics. But let's take it for granted that what Marvin and I were just jousting about can be achieved. Then I see a serious problem in human life being devalued. If we can have robots who have all our virtues and can do jobs more efficiently and never complain, why would we need imperfect humans? I fear that the people who would be most in jeopardy are the people who are marginalized in society--the poor, minorities, perhaps women, people in developing countries.
MARVIN MINSKY: That's what they said when Darwin started to promulgate his theory. I don't see anything wrong with human life being devalued if we have something better.
CHARLES PETIT: Darwin and evolution bring us to you, Alan. You're working with genetic algorithms and other means to actually breed better machines. Do you think evolution is going to play a role in the development of robots, and how?
ALAN SCHULTZ: From an engineering point of view, yes. Creating programs for machines is very difficult. I look at evolution as one method for building programs and letting them continue to learn and adapt once they are fielded. Let the program itself try to learn how to do its thing through its own experiments with the world--trial and error, if you will. Other people are looking at hardware that actually evolves--they use circuitry that can rewire itself. We have robots in our lab that learn to solve tasks by trying many things out in a simulation, and the programs that do better get to "reproduce" and create the next generation of robot programs that hopefully are more fit.
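For readers who want the mechanics, here is a minimal sketch of the kind of evolutionary loop Schultz describes. Everything in it is invented for illustration: the "programs" are just lists of numbers, and the fitness function stands in for how well a candidate controller does in a robot simulation.

    import random

    TARGET = [0.2, 0.8, 0.5, 0.9]  # stand-in for "doing the task well" in simulation

    def fitness(program):
        # Higher is better: negative squared distance from the target behavior.
        return -sum((p - t) ** 2 for p, t in zip(program, TARGET))

    def mutate(program, scale=0.1):
        # Trial and error: perturb some parameters at random.
        return [p + random.gauss(0, scale) if random.random() < 0.3 else p
                for p in program]

    # Start with a random population of candidate programs.
    population = [[random.random() for _ in TARGET] for _ in range(20)]

    for generation in range(50):
        # The programs that do better get to "reproduce" into the next generation.
        population.sort(key=fitness, reverse=True)
        parents = population[:5]
        population = [mutate(random.choice(parents)) for _ in range(20)]

    best = max(population, key=fitness)
    print("best program:", best, "fitness:", fitness(best))

Real evolutionary robotics replaces the fixed target with scores from a physics simulation, and often adds crossover between parents, but the select-mutate-repeat cycle is the same.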
CHARLES PETIT: Take that further. Are these inventions going to invent themselves? Are we going to replace ourselves someday?
ALAN SCHULTZ: I hope we're not going to replace ourselves. Maybe that's my goal, though. I don't see robots themselves reproducing; I'm not trying to draw that picture. But robots may get together and exchange some software: "Hey, I have a good way to do this," says one robot, and the other replies, "Well, I have a better way." If this leads to a better robot, then it's survival of the fittest.
ERIC HASELTINE: In fact, with some of the work that you do, where the program designs itself, we cannot predict what's going to happen. Evolution has been a powerful kind of stochastic process by which we've arrived at where we are, and one could look at robots as just another step in human evolution, for example. The main point is, we don't know.
ALAN SCHULTZ: The question, though, is, Can it really become sentient? I just don't know.
ERIC HASELTINE: It doesn't have to become sentient to get rid of us. It could just be something it does, an epiphenomenon of some other desire it has.
KAZUHIKO KAWAMURA: I suggest that in order for humanoid robots to be socially acceptable, they must be absolutely safe for humans to interact with, have feelings, and be moral.
JOHN McCARTHY: George Orwell wrote, "On the whole, human beings want to be good but not too good and not quite all the time." Maybe we can make robots better than people. I don't see why we can't program them to be quite ethical--more than we are. But I don't think we really want to imitate certain characteristics of human emotions. For example, one's opinion as to what is worthwhile is affected by the state of one's blood and how tired one is, and not just by a rational evaluation of the state of affairs.
ERIC HASELTINE: You know, if the fulcrum of the issue on ethics is devaluing the human condition, we should look at the opposite side of that. To what degree is there a potential to increase the value of humans with robotics? The fact is, if you talk about poor people and women and minorities and so forth, if robots get powerful enough and cheap enough, the productivity of the whole population goes up, and that empowers everybody. Not only that, by freeing people to focus on the things that they uniquely do best, it may help them become more than what they are.
CHARLES PETIT: What do you foresee as some of the most impressive applications of humanoid robots, and how far off are they? What are the obstacles in the way?
SEBASTIAN THRUN: There are robots in janitorial services, entertainment, and so on that can do things that couldn't be done 20 years ago. There are robots in hospitals. If I extrapolate, I see much more of this coming. One of the biggest areas of need is the elderly. The number one reason old people go into nursing homes is arthritis: They can't open their fridge anymore; they can't operate their microwave. One reason I suspect industry hasn't moved more quickly on this is that people haven't yet caught on to the idea that robots are actually doable. Very little money is put into robots. If we could get just 1 percent of the money that's put into cars today, a robot that assists you at home would be a reality in five to 10 years.
JOE ENGELBERGER: He's right. The only barrier today to a useful humanoid robot is money. We have the sensory perception, we have voice recognition and voice synthesis, and we have sufficient computer power. We have all the tools. Somebody ought to be able to understand that building a personal-assistant robot is a multibillion-dollar opportunity: It would create a market much larger than the entire industrial robot market. Old people would fall all over themselves for a chance to live independently in their own homes by having 24-hour servants. It's $5 million and 27 months away from real-world testing in the home of an elderly, frail person. And yet I'm having one hell of a time finding an industrialist who will agree.
ALAN SCHULTZ: I'm going to disagree with one thing. Joe, you've said that we have the devices that can do perception. I don't know about you all, but one of the biggest problems we have right now in robotics is that we don't have the eyeballs and the processing that goes on behind them.
JOE ENGELBERGER: Wait a minute. Let me tell you about our eyeballs. We have an eyeball system with a neck and nodding and recognition and ranging. But no one is commercializing it.
SEBASTIAN THRUN: It's actually not clear to me that we need human-level vision to build robots such as the ones Joe just described. Our CD changer doesn't use eyes and arms to manipulate CDs. We can fly and land airplanes autonomously without human-level vision. And our airplanes don't look like birds. We can build a range of useful service robots today. There are excellent image-understanding systems on the market.
ALAN SCHULTZ: Even if I could afford a good image-understanding system and a good sensor package, the cost of the pieces is only a small fraction of the total cost. The real expense is integrating the pieces, making them work as one system.
KAZUHIKO KAWAMURA: Alan's right. How you put the pieces together is the critical issue. Just think about making automobiles. If you are asked to build a new automobile, do you say, "OK, I have the best engine from Honda, I have the best body from Ford, I have the best shaft from GM, etc."? Then when you try to put them together, it doesn't work. The integration is not always that easy. One thing we have not discussed so far is "borg" technology. A bioelectronic aid for people with no central vision--artificial eyes--will be available around 2010. Cyborgs are more likely to show up before humanoid robots do.
ERIC HASELTINE: I don't think anybody on this panel would argue that it wouldn't be a good thing for humans to have mechanical devices that take over some chores that humans now perform. But there's going to be a lot of continuing discussion over whether robots should look and feel like humans. One argument in favor of making them more humanlike is this widening gap between humans and technology. Technology changes so rapidly and humans so slowly. What's going to bridge that gap? It has to be, on the human side, something that looks familiar.
ALAN SCHULTZ: But why not give it two eyes in the back of its head? We all wish we had eyes in the back of our heads.
ERIC HASELTINE: Because it's what's intuitive. If you want to adapt technology to people, you have to go to where people are. You don't want to bring people to the technology.
MARVIN MINSKY: It's true they have to be somewhat familiar, but nobody hangs faces on toasters. We don't make most appliances look like people. The new point to me is the idea that we don't want people to learn to order around servants that look like people, because that's catching. If you tell a household robot to do unspeakable, disgusting, or just boring things, you'll get the hang of telling other people to do the same. And most human interactions are rotten already. People lie, they cheat, they do all sorts of awful things. We've got to be careful not to say that things are OK as they are and that we want them to stay that way.
CHARLES PETIT: Plus we don't want to teach the robots to be rotten.
JOE ENGELBERGER: There's no necessity for robots to be evil unless, as Marvin says, we deliberately generate evilness in them.
ERIC HASELTINE: As with any technology, there is the potential for abuse, and it undoubtedly will happen. Robots will evolve pretty much like prime-time TV shows, based on the evolutionary forces in the marketplace. Whether we like it or not, what we as humans want is going to dictate the shape that robots take.
For more about the roundtable participants, try the following links:
Alan Schultz: www.aic.nrl.navy.mil/~schultz
Kazuhiko Kawamura: mot.vuse.vanderbilt.edu/ece/~kawamura.htm or the Center for Intelligent Systems at shogun.vuse.vanderbilt.edu/CIS
Joe Herkert: www4.ncsu.edu/unity/users/j/jherkert/jrh.html
Joe Engelberger: www.helpmaterobotics.com
Sebastian Thrun and the Robot Learning Laboratory: www.cs.cmu.edu/~rll
Marvin Minsky: www.media.mit.edu/~minsky
John McCarthy: www-formal.stanford.edu/jmc