The movies have brainwashed us into thinking that robots should look like people, but the revolution isn’t turning out that way. How are the machines changing, and how will they change us? DISCOVER, with the National Science Foundation and Carnegie Mellon University, posed these questions to four experts in a panel discussion and in individual video interviews with each scientist. Below is a transcript of the panel discussion.
Robin Murphy of Texas A&M is an expert in rescue robots; Red Whittaker of Carnegie Mellon designs robots that work in difficult environments; Javier Movellan of U.C. San Diego studies how robots interact with children; and Rodney Brooks of MIT founded iRobot, maker of the Roomba. Editor in chief Corey S. Powell moderated the panel.
Powell: Let’s start with the basics. What exactly is a robot?
Brooks: A robot is something that senses the world, does some sort of computation, and decides to take an action outside of its physical extremity. That action might be moving around, or it might be grabbing something and moving it. I say “outside its extremity” because I don’t like to let dishwashers be defined as robots.
Whittaker: I’m a little more liberal about that. I worked with robots that cleaned up the Three Mile Island nuclear accident [from 1984 to 1986]. Those were remote controlled, and one of the knocks against them was that they weren’t real robots. Those machines that are tour guides to the Titanic or our eyes on Mars don’t do a lot of thinking either, but they’re good enough in my book.
The idea that a system has to operate in space and time, with real-time constraints, is critical. It’s also critical to understand the intelligence of these things—it’s not just intelligence in general but an intelligence situated in a particular world.
Murphy: And I would add that sometimes this intelligence is shared. Is the robot just the physical entity at Three Mile Island or at a disaster site or on Mars? Or is it also right here with us? More and more robots are part of a shared cognition system.
Rodney, you’ve talked about four goals that robot researchers should be aiming for. What are they?
Brooks: First, the object-recognition capabilities of a 2-year-old child. You can show a 2-year-old a chair that he’s never seen before, and he’ll be able to say, “That’s a chair.” Our computer vision systems are not that good. But if our robots did have that capability, we’d be able to do a whole lot more.
Second, the language capabilities of a 4-year-old child. When you talk to a 4-year-old, you hardly have to dumb down your grammar at all. That is much better than our current speech systems can do.
Third, the manual dexterity of a 6-year-old child. A 6-year-old can tie his shoelaces. A 6-year-old can do every operation that a Chinese worker does in a factory. That level of dexterity, which would require a combination of new sorts of sensors, new sorts of actuators, and new algorithms, will let our robots do a whole lot more in the world.
Fourth, the social understanding of an 8- or 9-year-old child. Eight- or 9-year-olds understand the difference between their knowledge of the world and the knowledge of someone they are interacting with. When showing a robot how to do a task, they know to look at where the eyes of the robot are looking. They also know how to take social cues from the robot.
If we make progress in any of those four directions our robots will get a lot better than they are now.
We’ve already had a lot of success with robots in space. The Obama administration recently announced plans to cancel NASA’s Ares rockets and Orion capsule [updated in Obama's April speech], which were intended to take humans back to the moon—and Red, you don’t look at all sad about that. Why not?
Whittaker: This is actually very good news for robotics. Robot missions don’t require immense launch payloads. You don’t have to keep humans warm, keep them fed and watered and breathing.
In fact, you’re working now on a robotic mission that will be done without government funding, right?
Whittaker: Google is offering a $20 million prize for the first robot that sends television signals from the moon, and I intend to win that. There are bonuses for traveling a certain distance and for navigating to a place where humans have sent things before [for instance, the Apollo sites]. That calls for deliberate navigation, not robotic wandering. It’s nonfederal, but that’s how all of the great technological incentives work. When Lindbergh flew to Paris for his $25,000, it wasn’t a federal program. Great prizes completely transform people’s beliefs, catapult an industry, and drive technology. And they’re rather fun.
What about all the money NASA has already invested in these rockets? Is there a way to merge the manned program with a robotics program?
Whittaker: Robotic precursors could vastly improve the prospects for human exploration. For example, an orbiting spacecraft recently discovered the opening to a lunar cave. There are extensive caves on the moon, and they’re important because humans don’t do well in the extreme heat and the extreme cold on the surface. Those caves are waiting to be explored, but clearly no human would be the first one in. If you lose 10 robots exploring those caves, you don’t worry about it, but if you lose one person, it could shut down your entire space program.
There are also places near the poles of the moon where it stays light for months at a time. That is arguably the most valuable real estate in the solar system [because solar energy would be available so much of the time]. What a gift it would be for robots to confirm, survey, and establish these areas.
Brooks: In addition to precursor missions, the most astounding thing for us as humans will be if we discover life somewhere else. We’re looking at extrasolar planets [ones around other stars], but there are also a bunch of places in our solar system that look promising—the moons of Jupiter and Saturn, and back again to Mars. For the cost of two shuttle launches, we could have an extensive unmanned mission to each of those places. If Obama’s policy frees up money for robotic probes, it increases our chances of detecting life. That would open up whole new vistas in our understanding, and it would change us philosophically.
Another important potential role for robots is in search and rescue. Robin, how far have your robots gone in saving human lives?
Murphy: We have gone to places where rescuers couldn’t go because it was unsafe. We’ve certainly helped by providing surveillance and reconnaissance. In flooding disasters, for instance, you need marine vehicles to help check out bridges and seawalls. We’ve been at 11 disasters, including the World Trade Center, Hurricane Katrina, and the Crandall Canyon, Utah, mine collapse. But we’ve never pulled out anybody who was alive with a rescue robot.
What can robots do in search and rescue that a person or a dog cannot do?
Murphy: That’s the wrong question. We don’t need robots to replace humans and dogs. We need robots to extend the capabilities of people. People can’t be hummingbirds. Small helicopters, though, can evaluate structures and give overviews—oh, look, there are people over here; oh, there’s flash flooding over there. Aerial vehicles can fly under the forest canopies that piloted airplanes can’t see through. The same goes for looking underwater. People are not dolphins. We’re not replacing people with robot submarines; we’re extending our capabilities.
Right now, most of the people who survive after being trapped more than five days [after an earthquake like the one in Haiti in January or the one in Chile in February] are surface victims, people who were not heavily crushed. People farther in typically don’t get rescued in time. This is where technology can help. The trick is to make robots go deeper into the rubble, more than the 20 feet that the existing camera-on-a-stick technology can go. You want to extend people’s views 40 to 100 feet into the depths of the rubble, where you could find trapped survivors.
A recurring theme here is the challenge of getting humans and robots to work together. Javier, you’re doing that by pairing robots with small children.
Movellan: That’s right. I started working with robots and toddlers by coincidence. I had this project with Sony to make an entertainment robot. We put a lot of engineering into our robots, but after 10 hours of playing with them, people would put them back in the box and that was it. We thought, that doesn’t happen with dogs and other pets; what’s going on? To find out, we decided to work with some of the toughest critics on earth: infants. They don’t speak, yet they are capable of developing very deep social interactions with other people. We thought there would be lots of things to learn about the way infants interact with robots.
The first thing we realized was that robots designed beautifully by engineers did not work in this environment. After two or three interactions, the babies got bored with them. Then we started designing children’s robots with smile detection. When we first turned one on, the kids started running around in panic. My son was one of the testers, and I could hear him say, “Robot scary, robot scary.” By the end of the project, though, mothers were telling me, “Javier, I am a little bit concerned that my child is constantly talking about your robot.” We had progressed that much. Critical to our success was that we were always going out into the field and testing, and that we put some form of emotional mechanism into these robots.
Why do kids need robots at all?
Movellan: Robots are the best way we know to learn about how humans are put together. They are a reality check on our psychological theories, control theories, artificial intelligence, and so on. In addition, as we develop robots that can actually interact with humans and survive in these less controlled environments, we will find them to be an indispensable tool. Teachers really could use some help. We saw in some situations that robots could enhance children’s vocabulary skills. But at the moment, we’re working with one robot at a time. We want to scale this up to 1,000 robots that will be constantly sending us information about what it takes for children to learn, what kind of things the children are learning, what kind of things they are not learning, the effects of different environments, and so on.
There is a common fear that robots will replace humans—not just in space or on rescue missions but in manual labor. Is this concern warranted?
Whittaker: It’s not like there’s a plot afoot. Robotics isn’t lobbying for this. It just happens quietly over time—things get added on to existing technologies as features and improve productivity. Take agriculture. There was a time when surveyors put in stakes and marked the land. Now much of that can be automated right into a machine. Bulldozer blades governed by automated technologies are added on to what we would think of as heavy conventional machines. Big trucks in surface mines are benefiting from features of automation. We designed farming machines that harvest hay using machine vision systems, GPS guidance, and other sensing devices.
Brooks: People don’t realize how much of the food that we eat today has had a robotic element in its production. Tractors, as Red said, now come from the factory with GPS guidance systems. Not only are they guided, but they’re also using satellite data from those fields to know how much fertilizer to put where and how much seed to put in.
What about combat? This would seem a perfect place to eliminate humans, but some people worry about letting machines take control of warfare.
Brooks: I started a company that builds military robots. I think what scares people about robots is giving them targeting authority, letting them make decisions on who’s going to get shot. I don’t think that’s going to happen anytime in the short term—not in the U.S. military, at least [see “The Terminators,” page 36]. It may happen in devices used by insurgents and nongovernment combatants. Now might be a time for people to think about modifying the Geneva Conventions to put constraints on having robotic systems make targeting decisions by themselves. It may not be too long, however, before automatic systems are better at making judgments than humans are. You send a bunch of 19-year-old kids into a dark room with guns; if they think they’re getting shot at, they will shoot back without waiting to ask questions. A robot doesn’t have to shoot first.
So perhaps we should turn the question around and ask this: Will we reach a point where it is ethically imperative to use robots?
Brooks: When we first started sending robots to Afghanistan to handle improvised explosive devices, U.S. military doctrine was to put someone in a bomb suit and send him out to poke it. That’s no longer the case. A robot has to be sent to poke the bomb rather than a person. At iRobot we regularly get letters from people out in the field saying, “Your robot saved my life today.”
Movellan: There is also an ethical imperative to do research so that we can invent the technology that would save lives in the future. My son had an accident and went to the emergency room. There was an amazing team of doctors and robots performing critical care. This coordination between the robots and the doctors is what saved my child’s life. What struck me at the time is that somebody made these robots. Somebody had funded the research that made them possible.
Rodney, you’ve said that people tend not to use the robots the way you want them to but rather in a way they want to use them. What do you mean by that?
Brooks: Here’s an example. We got word back from the field in Afghanistan and Iraq that the soldiers didn’t like the menu system our engineers had designed. They said it “sucked”—I think that was the quote. Our solution was to ship controllers with designs based on Game Boys. After five minutes the soldiers had mastered 85 percent of the capability of the robot. It required us to think differently about how someone was actually going to interact with that robot and what they would bring to the interaction, rather than about what an engineer wants to do. Here’s another example that completely shocked us. The first prototype of the Roomba vacuum cleaner had a rocker switch, and it was a total disaster: people couldn’t tell which way was On and which way was Off, and we got calls to technical support all the time. Now the Roomba has one button in the middle that just says Clean.
Are there other examples of similar design mistakes as people figure out how robots should work?
Murphy: We started doing rescue robotics back in 1995. After four years of theoretical work, we had built a robot that could go over the rubble piles and then let out lots of little robots, which would wend their way deeper into the rubble. Once we started working with rescue teams, we realized you’d never need the bigger robot. Only the small ones were of any use at all. If we had all been guys, the rescue team members probably would have looked at us and just said: “Oh, what geeks. Get out of here.” But they felt sorry for us, kind of like you would for your sad little niece. They said, “Come on, come back and try again.” It’s one of the nice things about being a woman in robotics.
Computers are said to double in power and memory every two years or so, following what is known as Moore’s law. Is there a Moore’s law in robotics?
Brooks: When I first came to the United States, I was sort of a gofer for Hans Moravec, who has been part of the Robotics Institute here at Carnegie Mellon for many years. Back then he was out at Stanford. In 1979 I remember working late at night with him when no one else was using the mainframe, and his robot would go autonomously through a crowded room. It took the robot six hours to go 20 meters. Ian Horswill, one of my graduate students at MIT, built a robot named Polly in 1992. It would give tours of the lab. It could operate for about six hours and go 2,000 meters. In the DARPA Grand Challenge in 2005, robots went 200 kilometers in six hours. So over a 26-year span, there were 13 doublings in capability, if you measure it as the distance a robot can go autonomously without human intervention in six hours. We have seen Moore’s law in action.
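For readers who want to check Brooks’s arithmetic, the three data points he cites can be run through a quick back-of-the-envelope script (the distances and years are taken from his answer; the script itself is purely illustrative):

```python
import math

# Distance (in meters) a robot could travel autonomously in six hours,
# per Brooks's three data points.
distance_m = {1979: 20, 1992: 2_000, 2005: 200_000}  # 200 km = 200,000 m

growth = distance_m[2005] / distance_m[1979]  # a 10,000-fold increase
doublings = math.log2(growth)                 # about 13.3 doublings
span = 2005 - 1979                            # 26 years

print(f"{doublings:.1f} doublings over {span} years: "
      f"one doubling every {span / doublings:.1f} years")
# -> 13.3 doublings over 26 years: one doubling every 2.0 years
```

One doubling roughly every two years is the same cadence Moore’s law describes for transistor counts, which is the point of the comparison.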
[Audience member] How will the proliferation of robots in everyday life change our language? A “phone,” for instance, is no longer merely something you talk on; it’s also a computer, a camera, a music player.
Whittaker: It’s not enough that robots achieve their purpose or look good doing it. They’re now expected to communicate at the same time. When NASA landed the Sojourner rover on Mars in 1997, people followed it on the Internet. With the next Mars rovers, Spirit and Opportunity, you could ask questions and get responses, but that was still a human writing back. The next step is for those machines to automatically generate conversation. One central feature for a moon robot [designed to compete for the Google Lunar X Prize] is that it needs to do a good job of tweeting to the world about where it is and where it’s going.
The meaning of the word appliance will change. Some number of years from now, there will be no appliance in your home that will not be robotic, that will not be connected to the Net and have some transformative functions that are hard for us to imagine now.
The word team is going to change too. In the future, if you don’t have some component of artificial intelligence with you, either software based or physically based, you can’t really be a team.
When we have intelligent systems that approximate the way people behave, it is going to change the definition of the word human. It’s going to make us rethink how we are put together and what it means to be human.
[Audience member] What about the liability problem? How do we prevent robotics technology from being derailed by lawyers and lawsuits?
Brooks: If we had listened to lawyers, we would not have released the Roomba. They said that we would get sued all the time. We have not had one safety suit, even with 5 million units sold. We were careful. We put in triple redundancy for various safety systems, but at some point you have to believe that you have built a safe system and not worry about the minuscule chance that something goes wrong. Otherwise we would never advance technologically.
Murphy: I see a bigger worry in military robots. We see a lot of things coming out that are prototypes in the lab, and there’s a bit of a culture of “Well, I just have to make my widget work. I just have to get close enough and not think about the downstream consequences.” We’re beginning to see that with [unmanned aerial vehicles]: “It’s an autonomous system, and it behaved unpredictably because the environment was unpredictable.” If you don’t have good callback procedures—if you don’t have a culture of safety, a culture of looking ahead—you’re violating what’s called in the ethics literature “operational morality.” That’s where we as roboticists may be leaving ourselves vulnerable to the lawyers, whereas in the commercial world you learn very much how to make things safe and what is important.
[Audience member] You’ve talked about robots and human interaction, but what about the ultimate interaction—sex with robots?
Not so long ago people didn’t have a clue how to make money with the Internet. When it was clear that selling sex was going to be an early winner, it attracted a lot of technical talent. There’s no question that it’s an inevitable future.
This has nothing to do with robotics per se. [Pornography] was pamphlets when the printing press was around. In the 19th century it was photographs. In my lifetime it drove the home video market and then the Internet. So one always has technology being driven to some degree by those things. When I wrote my book, the editor demanded that I put something about sex in. So I just said, “I’m not imaginative enough to figure out how robots and sex are going to work out, but it will happen.”
Movellan: We’ve been talking about robots as machines that do tasks and about robots as machines that relate to people. It’s relatively easy to learn how to make machines that people fall in love with. And it’s not that complicated to make robots that people have fun having sex with. What is really going to teach us about human nature, however, is when we begin to think about what it takes to build robots that can fall in love with people. What does that mean? We don’t have any formalism to understand it yet. We can’t separate love and affect and emotion from what it means to be human and hopefully what it will mean to be a robot.
Will a true convergence of humans and robots ever happen? Will robots become so much a part of our lives that we forget they’re even there?
There will be a time when those distinctions are obscured. These machines are already farming and securing and exploring. Beyond that, in ways that relate more to humans, robots are already surgeons and playmates and entertainers. Some of the things that we have talked about here may give new meaning to tools and toys.
It’s unequivocal that we will soon forget the existence of robots in everyday life. If in the 1960s you had said, “I’ve got an idea for a company that’s going to put computers in coffee shops,” people would have rightly said, “You’re nuts.” But we have lots of computers in lots of coffee shops now. We don’t think about having computers in coffee shops; they are just there. There will be robots in coffee shops before long. There will be robots everywhere.