Walk by Cynthia Breazeal's workbench in MIT's Artificial Intelligence Lab, and you can't help but notice a hunk of aluminum filled with silicon chips and electric motors, a machine purposely shaped and sized like a human head. Actually, what you can't help noticing is that it looks lonely. Its big, red rubber lips are turned down in a frown, its fuzzy eyebrows are heavy, its curly pink ears appear crestfallen. Its huge baby-doll eyes are scanning the room, searching for someone.
So it's no wonder that when Breazeal comes into the room and sits down in front of her needy little robot, which she calls Kismet, its mood begins to change. Looking straight into Kismet's eyes, Breazeal offers a "human face stimulus." Kismet's eyebrows go straight up, making its baby blues appear even wider as it looks back at its creator with growing interest. "Mutual regard," she says. Then Kismet wiggles its ears up and down. "Greeting behavior," says Breazeal. Kismet's expression shifts into another state, the sort a parent loves to evoke in a child. Happiness. Kismet is all smiles. Breazeal, who created this creature, is not surprised: "Happiness is the achievement of a desired stimulus," she says.
Next she talks baby talk, cooing like a new mother. That keeps Kismet interested, smiling and watching. Just for contrast, Breazeal begins to sway back and forth. Uh-oh. Kismet doesn't like that at all. It looks annoyed, says Breazeal, because it's "overstimulated." Kismet turns up a lip, raises one eyebrow, lowers the other. The message is clear: Stop this nonsense! So Breazeal turns away, and Kismet grows calmer, but only for a while. Deprived of attention, face stimulus gone, Kismet grows sad. Breazeal turns around. Happiness returns.
Breazeal can keep this going, keep Kismet happy by paying constant attention to the robot as if it were an infant, which in a sense it is. She can, for example, pick up a toy stuffed dinosaur and begin playing with Kismet. Kismet likes that. But like an infant, Kismet can become tired. Enough of this, Kismet seems to say as it slowly begins closing its eyes and goes to sleep.
Before you ask the question, Breazeal has the answer: "The behavior is not canned," she says. "It is being computed and is not a random thing. The interaction is rich enough so that you can't tell what's going to happen next. The overarching behavior is that the robot is seeking someone out, but the internal factors are changing all the time." Each interaction is different; each exchange has its own action and narrative.
Still, there seem to be some missing elements in this clever personality. And Breazeal acknowledges that Kismet has not been programmed for every emotion. For example, she has been saving surprise for the day when she installs the learning algorithms she is working on. Then learning will become Kismet's desired stimulus; learning will make the robot happy. As Kismet begins to learn, Breazeal says, it will slowly become more socially sophisticated, like an infant becoming a small child. That will bring Breazeal a lot closer to achieving her real goal. "To me," she says without hesitation, "the ultimate milestone is a robot that can be your friend. To me, that's the ultimate in social intelligence."
When Cynthia Breazeal arrived at MIT in 1990 to begin working on her master's degree in electrical engineering and computer science, she didn't have a specific project in mind, only a general vision. She knew she wanted to build a robot. At first she was fascinated by the robots being built by Rodney Brooks, an MIT professor and innovator in artificial intelligence. At the time, Brooks was building insectlike robots, autonomous machines with simple sensors that enabled them to avoid obstacles. His defining idea was that biological behavior could evolve as much through interaction with the environment as through programming. He called this interaction in the real world "embodiment." Some of the technologies he developed were used to build Sojourner, the planetary rover that roamed the surface of Mars during the Pathfinder mission in 1997.
Breazeal, the daughter of a mathematician and a computer scientist, had grown up watching Star Wars and Star Trek. When she saw what Brooks was doing, she thought: "He's building R2-D2! That's what I want to do!" She didn't want to build a replica of R2-D2, but a machine in the same spirit, "a synthetic creature that was emotionally expressive, capable of being a friend." Even though R2-D2 communicated with whistles, squeaks, and body language, she thought it was "fun and funny, feeling and warm." Her kind of robot.
So before long Breazeal was following in Brooks's footsteps, helping build two insect robots and designing their software. One, which she called Attila, was a planetary rover funded by NASA's Jet Propulsion Laboratory. Hannibal, a robot with 19 moving parts and 60 sensory inputs, went to Death Valley to participate in a convention for planetary mini-rovers. The two projects eventually became her thesis. "Cynthia's master's thesis could have been a Ph.D. at most places," Brooks says.
Not long after Breazeal began working at MIT, Brooks took a sabbatical. When he returned, he said he wanted to move beyond insect robots. He had one big project left in him, he said: an android inspired by Commander Data in the Star Trek television series. He had decided his masterwork--a humanoid called Cog, for "cognitive"--would be given human experiences and would learn intelligence by interacting with the world.
So the work began. Brooks's team built a body from the waist up. He wrote a new computer language for programming it and designed a new operating system. Breazeal developed a stereo vision system. Another researcher made a Ph.D. dissertation out of developing the robot's arms. Breazeal and grad student Brian Scassellati worked on an attention system, giving Cog the ability to distinguish between sound, color, and movement, and to orient itself toward a preferred source. They modeled the system on the superior colliculus, the part of a vertebrate's brain that takes information from the senses, then figures out where stimuli are coming from and which muscles to move in response. Within a few years Cog could make eye contact and move its head to track a moving object. Then Cog could move like a human. Its formidable but gentle arms could throw and catch a ball, play with a Slinky, point at things, even listen to rock and roll and beat out a corresponding rhythm on a snare drum.
Cog began taking on an aura of human possibilities. One day, during a videotaping, Breazeal picked up an eraser and wiggled it in front of the robot. Cog looked at the eraser, then reached out and grabbed for it. Breazeal wiggled it again. Cog reached for it again. Brooks watched the tape and was taken aback. Wait a minute, he thought. We haven't put turn taking into the program yet! "It looked like the robot was deliberately taking turns, but we knew it wasn't. Cynthia was unconsciously capitalizing on little bits of dynamics in the robot. She was doing the turn taking and making this game out of it."
Meanwhile, Breazeal began studying the process of cognitive development in children. "I began to get interested in the idea that infants are the simplest people. I wanted people to be able to naturally fall into the mode of treating the robot as if it were an infant, to naturally help the robot as much as they could."
And she began to think that the statuesque android was not right somehow. "You look at Cog and you realize that no one is going to treat Cog like an infant," she says with a laugh. "It's this huge linebacker robot. Cog is 6-foot-5, with shoulders out to here. Its face is way up there, you can't get close to it, and it has no facial expressions.
"So I started thinking about building a separate robot platform that focused on communication abilities that I cared about. I wanted to have face-to-face exchange, with expressions and eventually vocalization. I wanted to look at what you could do with that kind of close proximity and social interaction. And that's when I started building Kismet, in the summer of 1997."
Breazeal took a spare Cog head and reengineered it, lengthening the neck, adding a jaw. She got the eyes from a special effects supplier in Los Angeles. He told her to make them big and blue, like the Gerber baby, if she wanted people to treat her robot like an infant. He also helped her embed a color CCD camera into the pupil of each eye. She set up small motors to move facial features--eyebrows that lift and arch, ears that lift and rotate, jaws that open and close, and lips that bend, straighten, and curl. A network of three integrated circuits controls perception and attention. Three more drive the facial motors, the motivational system, and eye and neck motion.
Breazeal also had to write special software consisting of what she calls "drives" and "emotions." Drives are similar to needs, and there are three in Kismet. The social drive becomes a need for people, the stimulation drive seeks toys and other objects, and the fatigue drive creates a need for sleep. Each drive has a normal position, a place it wants to be, which Breazeal calls the "homeostatic regime." When unattended, the drives move into an understimulated regime, so needs grow for social interaction and stimulation. Too much activity and the drives drift into the overstimulated regime, and a need grows for a break from the action.
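Breazeal's actual source code isn't published in the article, but the drive mechanics she describes can be sketched as a scalar that drifts toward understimulation when ignored and is pushed back by interaction. In this illustrative sketch, the class name, thresholds, and update rule are assumptions, not her implementation:

```python
# Illustrative sketch of one of Kismet's "drives" as described in the article.
# The regime thresholds and update rule are assumptions, not Breazeal's code.

class Drive:
    def __init__(self, name, low=-1.0, high=1.0, band=0.3, decay=0.05):
        self.name = name
        self.level = 0.0          # 0.0 is the center of the homeostatic regime
        self.low, self.high = low, high
        self.band = band          # half-width of the homeostatic band
        self.decay = decay        # drift toward understimulation each cycle

    def tick(self, stimulus=0.0):
        """Each cycle the drive drifts down; stimulation pushes it back up."""
        self.level = max(self.low,
                         min(self.high, self.level - self.decay + stimulus))
        return self.regime()

    def regime(self):
        if self.level < -self.band:
            return "understimulated"   # a need for interaction has grown
        if self.level > self.band:
            return "overstimulated"    # a need for a break has grown
        return "homeostatic"           # the place the drive wants to be

social = Drive("social")
for _ in range(10):              # unattended, the social drive drifts...
    social.tick(stimulus=0.0)
print(social.regime())           # prints "understimulated"
```

Under this scheme, a steady face stimulus would supply just enough positive input each cycle to hold the drive inside its homeostatic band, which is why Kismet demands constant attention.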
Kismet's emotions are not linear. They occupy a kind of three-dimensional place Breazeal calls "affect space." The emotions break down into high and low values, positive and negative states, open or closed positions, each conforming to research on actual human emotions. Breazeal says happiness, for example, is a "positive state with a neutral value and an open position." In human terms, free and easy with no worries; life is good.
Happiness is near the center of the homeostatic regime, a state to which the robot aspires. The expression of interest is also within the homeostatic regime. Calm is there too. Outside the homeostatic regime lie other responses: sadness, boredom, fear, disgust, and anger.
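The affect space Breazeal describes can be pictured as three axes--state, value, and position--with each emotion occupying a point, and the robot's current affect mapped to whichever labeled emotion lies nearest. The coordinates below are illustrative guesses consistent with the article's description (happiness as a positive state, neutral value, open position), not her actual model:

```python
# Sketch of Kismet's three-dimensional "affect space." The coordinates are
# assumptions based on the article's description, not Breazeal's model.

import math

# (state, value, position), each in [-1, 1]
EMOTION_POINTS = {
    "happiness": ( 1.0,  0.0,  1.0),   # positive state, neutral value, open
    "interest":  ( 0.5,  0.5,  1.0),
    "calm":      ( 0.0, -0.3,  0.5),
    "sadness":   (-0.6, -0.5,  0.0),
    "boredom":   (-0.3, -0.8,  0.0),
    "fear":      (-0.8,  0.8, -1.0),
    "disgust":   (-0.7,  0.2, -0.8),
    "anger":     (-1.0,  0.9, -0.5),
}

def nearest_emotion(point):
    """Map a point in affect space to the closest labeled emotion."""
    return min(EMOTION_POINTS,
               key=lambda e: math.dist(point, EMOTION_POINTS[e]))

print(nearest_emotion((0.9, 0.1, 0.9)))   # prints "happiness"
```

Because the space is continuous, the robot's affect can slide gradually from one emotion toward a neighbor rather than snapping between discrete states.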
Depending on sensory information flowing in through Kismet's eyes and the status of its motivational system, strategies form that activate behavior. If the social drive is overwhelmed, an "avoid person" strategy forms, and Kismet may assume an expression of annoyance and look away. A "seek person" behavior develops when the social drive is in the understimulated regime, and the face forms an expression of sadness. When Kismet is in the homeostatic regime, happy and interested and receiving a good face stimulus, the "engage person" behavior remains active. As drives move and shift and become satiated, other behaviors rise and fall: "engage toy," "avoid toy," and "sleep." For Kismet, the process of maintaining drives in the homeostatic regime is never-ending.
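The behavior selection described above amounts to a mapping from the drives' current regimes to an active strategy. A minimal sketch, with rules and priorities that are illustrative assumptions rather than Kismet's actual logic:

```python
# Sketch of how drive regimes might activate Kismet's behaviors, following
# the article's description. Rules and priorities are assumptions.

def select_behavior(social_regime, stimulation_regime, fatigue_regime):
    """Pick the active behavior from the three drives' current regimes."""
    if fatigue_regime == "overstimulated":
        return "sleep"                 # self-protective nap
    if social_regime == "overstimulated":
        return "avoid person"          # annoyance, look away
    if social_regime == "understimulated":
        return "seek person"           # sadness, search for a face
    if stimulation_regime == "overstimulated":
        return "avoid toy"
    if stimulation_regime == "understimulated":
        return "engage toy"
    return "engage person"             # homeostatic: happy and interested

print(select_behavior("homeostatic", "homeostatic", "homeostatic"))
# prints "engage person"
```

Run every cycle against drives that are constantly drifting, a selector like this produces exactly the never-ending rise and fall of behaviors the article describes.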
Kismet even has a strategy for self-protection. Like an infant who can fall asleep in a noisy room, Kismet can doze off if the environment becomes too stimulating. A nap refreshes: Kismet undergoes a "motivational reboot," awakening in the homeostatic regime, happy and interested. Or self-protection can evoke annoyance. If that doesn't change the situation, Kismet shows anger or fear. Then, at maximum stimulation, when it can't take any more, Kismet tunes out and looks away.
Inspired by her study of the relationship between a mother and her infant, Breazeal wanted to create a powerful social manipulator. Just as a baby uses expressions, kicks, and cries to manipulate the mother into satisfying its needs and desires, Kismet is designed to engage people in a wide variety of social interactions that satisfy its internal drives. Because the nature of Kismet is ultimately to learn, to become more sophisticated, and to develop as a social creature, it is driven to engage people and to keep them engaged.
Breazeal believes interactions between a human and a robot must be meaningful for both, which means the person has to find the creature believable. The robot must appear to have "intentions, desires, and beliefs," she says. It must have intentionality. Fortunately for Kismet, humans are easy prey for intentionality. They seem to be "hardwired" for it, Breazeal says. Mothers, for example, believe that their infants understand them: "We anthropomorphize all kinds of things, our pets, our cars, our computers. Whenever we have to engage something in an intimate, interpersonal way, we very naturally fall into this intentional mode. People get angry at their cars for making them late for work, and of course they know that the car is just being a car, but it's natural for them to relate to something, to interact in a personal way."
With an android, as with a good story, there must also be a willing suspension of disbelief. Again and again, when people visit the MIT lab and sit down in front of Kismet, they quickly fall into an exchange and take on a caregiving role. They want the robot to respond positively to them. "When people see Cog," Breazeal says, "they tend to say, 'That's interesting.' But with Kismet they tend to say, 'It smiled at me!' or 'I made it happy!'"
Breazeal says her work is still research, "so much at the beginning that we haven't even begun yet." But before long a more advanced, adapted version of Kismet will be sitting on Cog's shoulders. And Kismet has become a step toward a future that Breazeal and Brooks deeply believe in, a future they both want. At some point robots and humans will coexist, Breazeal says. And they won't just be appliances; they'll be friends. "This is the kind of robot that goes beyond something that just takes out the trash or delivers medicines in a hospital," she says. Brooks says developing androids challenges humans' "last refuge of specialness. At first we thought the Earth was the center of the universe. And then there was Darwin. And then Crick and Watson showed that we're all made from the same DNA, essentially. And they said that a computer couldn't play chess, and when a computer could play chess, they said it couldn't feel. We're trying to push on that boundary. That's all that's sort of left to us, all there is to be special about. And so we're trying to see if we can make a machine that has emotions. We don't know exactly how to do it, but we're trying to do it."
"What we're doing," Breazeal says matter-of-factly, "is putting humanity into the humanoid."
And if they're successful, will it be a person?
"How will we know?" she answers.
Can We Build One For You? In more than 70 hospitals around the world, courier robots called HelpMates trundle down halls, call elevators, and deliver meals. In Sweden, appliance maker Electrolux is product-testing a sleek, round robotic vacuum cleaner. In Washington, D.C., a robot tour guide called Minerva recently showed visitors around the chaotic halls of the Smithsonian. People blocking her path were greeted with a frown and a stern request to move. Nearly 20 percent of the Smithsonian's visitors who met Minerva said she seemed as intelligent as a person.
Do these real-world robots finally herald a new age, the dawn of androids? Hans Moravec of Carnegie Mellon University's Robotics Institute says yes. In 30 years, he predicts, we'll have butler-type robots with monkeylike thinking power. These robots will be able to say how they feel and will put flowers out if they think you're sad.
At the Humanoid Research Laboratory of Tokyo's Waseda University, Atsuo Takanishi says we'll have a "certain level of useful humanoid robots within 30 years." He has built a gleaming robotic head called WE-3RII that shows expressions like happiness and disgust. In the future, the head could be attached to Waseda's WABIAN robot, which is learning how to move like a person and can even dance.
Takanishi says robots will never duplicate humans, but Moravec believes robots will soon begin evolving in simple steps just as humans did, although 10 million times faster. He expects robots will surpass human intelligence in the next 50 years: "The stages are quite natural. The reason I can be really confident about them is they've already happened once in our own evolution, so all we've got to do is do it again."
Kazuhiko Kawamura of Vanderbilt University is more skeptical. "I don't think we'll see machines that can think in an abstract way and discuss the existence of God in our lifetimes," he says. But he doesn't say it won't happen: "As long as there are robotics researchers like us, sooner or later I'm sure we'll make it. That's my hope." --Fenella Saunders