What is Consciousness?

Defining it is hard enough--giving it to a computer is even harder.

By Christof Koch, Terry Winograd, and Hans Moravec
November 1, 1992

The Connected Brain

Christof Koch uses studies of visual awareness to open a window on the way the brain perceives the world around it. Two years ago Koch, a specialist in computer vision, teamed up with Nobel Prize-winning biologist Francis Crick, the codiscoverer of the structure of DNA, to devise a neurobiological theory of consciousness. This is not the consciousness that involves subjective feelings of pain or pleasure; although Koch believes this is an integral part of consciousness, he argues that it does not lend itself to rigorous tests and scientific analysis. Instead, he and Crick focused on a phenomenon observed by other researchers: that neurons in different areas of the brain, when presented with a common stimulus, will fire at roughly the same time. Koch and Crick argue that this firing is a way of temporarily uniting information in different areas of the brain, and it is this synchrony that generates the brain’s awareness. DISCOVER saw the 36-year-old Koch in his basement office and lab at Caltech, surrounded by various computer workstations and a cappuccino machine.

DISCOVER: You talk in your papers about obstacles to the study of consciousness, such as behaviorism and the computer paradigm. What do you mean by those?

KOCH: Behaviorism arose as a reaction partly to the German Gestalt school of psychology and, in general, to late-nineteenth-century theorizing about internal human drives. Behaviorists threw out any mention of internal states. They believed the only important thing to know about is human behavior, human actions. In effect these scientists were saying, I don’t know what you want, I don’t know whether you’re aware or not. All I know is that if I stimulate you in this manner, you’ll respond in that manner.

Now, certainly you can describe some human phenomena in terms of stimulus and response; but very complicated abilities, like why you play, or things like that, cannot be explained in such terms. Still, that school of thought really blossomed in this country, particularly in medical schools. That’s why it’s difficult today to talk about consciousness with hard-core scientists. There’s a sense that once you start talking about consciousness, next you’ll talk about religion.

After the behaviorists come the cognitive scientists, and they say they’re going to be more rigorous about consciousness. And what they do is copy models from digital computers. So they have a box called the input-buffer, and they have a box called the output-buffer, and they have a box called the CPU--all terms adopted from computer science. And these ideas are pernicious. Yes, to some extent we behave like a computer. Somewhere we store things, and we store things over short time scales and long time scales, so you can think of it like a random-access memory, or like a disk memory. But then people go on and construct these models and they never worry about the nerve cell. I mean, where’s the input-buffer? Where are these things in the brain? What we have in the brain, as far as we know, is that everything is interconnected: you have nerve cells, which do both the information processing and the information storage. So, for instance, we don’t have a distinction between CPU and memory.

DISCOVER: Then how should you, or anyone, go about studying consciousness?

KOCH: Francis Crick has this analogy. He says people made sophisticated theories about genes in the 1940s and 1950s, yet genes were not understood until he and James Watson said, Let’s forget about these high-level function models; let’s go down to the molecular level. At the time they didn’t even have a precise definition of a gene. But they said, Let’s forget about that. Let’s not worry about the function, the exact definition--let’s go down to the level of molecules. And then it began to become very clear, and today we have the whole Human Genome Project. And so with consciousness, Crick says, it’s the same thing: Let’s forget about these high-level models; let’s first of all go to the level of the nerve cells because that’s where consciousness ultimately has to be.

DISCOVER: But in what way are you defining consciousness?

KOCH: Well, let’s first forget about the real difficult aspects, like subjective feelings, because they may not have a scientific solution. The subjective state of play, of pain, of pleasure, of seeing blue, of smelling a rose--there seems to be a huge jump between the materialistic level, of explaining molecules and neurons, and the subjective level. Let’s focus on those things that are easier to study--like visual awareness. You’re now talking to me, but you’re not looking at me, you’re looking at the cappuccino, and so you are aware of it. You can say, It’s a cup and there’s some liquid in it. If I give it to you, you’ll move your arm and you’ll take it--you’ll respond in a meaningful manner. That’s what I call awareness. And we know that there are patients who don’t have this.

DISCOVER: You’re speaking of neurological patients.

KOCH: Yes. Folks with something called blindsight. They’re quite a famous group of people. They’re often blind on one side because something happened in their brain; they had a stroke. Let’s say a man is affected on the right part of the cortex, so that he can’t see on the left side. He goes to the doctor at the clinic and the doctor says, Well, can you see the hand? And he says, No. I mean, why are you doing that? You know I’m blind. And the doctor says, Well, tell me, where is it? The patient says, I’m blind! Why are you doing this to me? The doctor says, Well, forget about it--just guess. And the patient says, Okay, I have no idea, but I’m just going to guess.

It turns out that he guesses correctly. Now, such people cannot do everything. For example, they cannot differentiate a circle from a triangle, or from a square. But they can do colors to some extent, they can do motion, they can do position. It turns out that here you have a beautiful instance of a divorce between awareness, or consciousness, and unaware knowledge. Something in the patient’s system knows about the position of the doctor’s hand because he can make a meaningful response, but his mind does not know about it--there’s no conscious aspect.

This operates in normal life too--you drive home from work, the same drive you do every day. You sit down at the wheel, but you’re lost in some problem at work, and you arrive at your garage and suddenly you realize, Oh, I’m home. Yet you had to steer, you had to brake at lights, you had to accelerate; I mean, you had to do a lot of heavy-duty information processing. Well, what happened? Why were you not aware?

DISCOVER: You and Crick have suggested that the mind has something you call an attentional spotlight, which highlights selected parts of our environment, and that only then do we become aware of them. Without the spotlight, we are like the people with blindsight: our brain is somehow processing this information, but we’re not aware of it.

KOCH: You can think of it as a metaphor: You’re in a dark room and you have a flashlight. You can process information only where the light shines. You can also change the scope of the flashlight and you can move the light around at will.

The way this works in the brain is, we think, by oscillation, or semisynchronous firing, of neurons. I look at you, for example, and you move; neurons in my MT--a brain area that responds to motion--fire. You talk to me and neurons in A-1--an auditory area in the brain--fire. Now let’s say we’re outside and having lunch, and there are other people talking behind you. They also move their heads, and so their images will also strike my retina and will also fire neurons, let’s say, in MT. So I have to combine all the neuronal activity corresponding to your face and voice; plus, I have to segregate that from all the other neurons that are also responding at the same time to everything that’s going on around you. If I could label all these neurons responding to you with the color red and all the other neurons blue, then I’d have to track just the red neurons. One way to do this is oscillation. Think of a Christmas tree with a billion lights on it; each light flashes only occasionally. Now, there’s a subset of lights--a very small one, let’s say there are only 10,000--and they flash all together. If you just visualize that, you’ll see that it will stand out very readily.
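To make the Christmas-tree picture concrete, here is a minimal sketch in Python of how a synchronized subset could stand out against a background of randomly firing units. Every number and name in it is illustrative rather than drawn from Koch and Crick’s work: spikes are grouped into short time windows, and the windows that catch the synchronized volley hold far more spikes than a typical window.

```python
import random

# Illustrative sketch of the "Christmas tree" intuition: a small subset of
# units fires in lockstep while the rest flash at random times. Binning
# spikes into short windows makes the synchronized subset stand out.
random.seed(0)
N_UNITS = 1000            # total "lights" on the tree
SYNCED = set(range(50))   # the small subset that flashes together
WINDOW_MS = 5             # bin width for "roughly the same time"
DURATION_MS = 1000

spikes = {}  # unit id -> list of spike times in milliseconds
for unit in range(N_UNITS):
    if unit in SYNCED:
        # the synchronized subset fires together at 40 Hz (every 25 ms)
        spikes[unit] = list(range(0, DURATION_MS, 25))
    else:
        # background units flash only occasionally, at random moments
        spikes[unit] = sorted(random.sample(range(DURATION_MS), 5))

# Count how many units fire in each 5 ms window.
bins = [0] * (DURATION_MS // WINDOW_MS)
for times in spikes.values():
    for t in times:
        bins[t // WINDOW_MS] += 1

# Windows containing the synchronized volley hold far more spikes than a
# typical window; that contrast is what could "stand out" downstream.
print("busiest window:", max(bins), "median window:", sorted(bins)[len(bins) // 2])
```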

Of course, there’s a problem. There’s a you; that is, there’s an observer. And there is no such observer in the brain. We don’t want to go back to the notion of a homunculus, looking at this.

DISCOVER: That would be going back to the idea of the little man in the brain.

KOCH: Exactly. But if I’m a nerve cell, and I get input, let’s say, from those 10,000 other neurons--if they all fire together they are going to have much more of an effect on my activity than if their firing is spread out over time.

In other words, all the neurons that currently attend to some aspect of you and your face and your voice all fire in a semisynchronous manner, roughly all at the same time. All the other neurons that respond to anything else found on my retina also fire, but they fire randomly. This synchronized activity releases some substance, some chemical substance, that sticks around, say, for ten seconds or two seconds or five seconds--that constitutes short-term memory. Whenever I’m aware of something, whenever I attend to something, I put it in short-term memory. It’s a very tight link.
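The point about synchronous input having more effect than the same input spread out in time can be sketched with a toy leaky integrator. The leak rate, weight, and threshold below are arbitrary placeholders, not a model of any real neuron; the sketch only shows that the same number of input spikes drives the cell over threshold when they arrive together and fails to when they are spread out.

```python
# Toy leaky integrator: the same number of input spikes drives a downstream
# cell much harder when they arrive together than when they are spread out.
# All parameters are illustrative only.

def crosses_threshold(spike_times, leak=0.9, weight=1.0, threshold=30.0, duration=100):
    """Return True if the membrane value ever reaches threshold."""
    arrivals = {}
    for t in spike_times:
        arrivals[t] = arrivals.get(t, 0) + 1
    v = 0.0
    for t in range(duration):
        v *= leak                         # passive decay each millisecond
        v += weight * arrivals.get(t, 0)  # add the spikes landing right now
        if v >= threshold:
            return True
    return False

synchronous = [10] * 50                  # 50 spikes landing in the same millisecond
spread_out = list(range(0, 100, 2))      # the same 50 spikes spread over 100 ms

print("synchronous volley fires the cell:", crosses_threshold(synchronous))  # True
print("spread-out input fires the cell:  ", crosses_threshold(spread_out))   # False
```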

DISCOVER: What does the substance do?

KOCH: There are two things it could do: either it could just make these neurons fire for five seconds longer--that’s my trace of memory--or it could strengthen the synapses so those same synapses are much stronger for the next ten seconds; then I need a much smaller input to make those cells fire again. You have these 10,000 neurons becoming synchronized for a short time, then they again become desynchronized and start to fire at random, and some other group becomes synchronized--and it just moves over the brain all the time, very, very rapidly.

DISCOVER: What’s the scientific evidence for this kind of neuronal firing?

KOCH: People have known for about 40 years that neurons behave this way in the olfactory part of the brain, the olfactory cortex, in rabbits. These neurons seem to fire at a frequency of about 40 hertz when presented with a stimulus. In 1988 researchers discovered similar types of oscillations in the visual cortex of anesthetized, but awake, cats. The frequency range was broader, though, roughly 30 to 70 hertz. In monkeys, some people see these oscillations when they do experiments to look for them, but some people don’t. It’s much more controversial.

DISCOVER: And this hypothetical substance released by these firings, no one has seen that?

KOCH: No, it’s just our hypothesis at this point. But we’re suggesting that these two things together constitute the neuronal level, the physical substrate, of awareness.

I do think this is a very tricky topic to address scientifically, and we’ve only attempted it very recently. Of course, Crick is so well known, he can do anything he wants. But 95 percent of the time I do sort of accepted stuff on computer vision. You know, low-level bread-and-butter stuff. We’ll get grants, we’ll be published, all that. This is, of course, much more interesting but much more speculative.

DISCOVER: So what prompted you to go so far out on a limb with this investigation?

KOCH: You know, in physics you get the idea that the big things have all been worked out--except in astrophysics--and you work, you do your Ph.D. with ten other people on an accelerator for five years to measure some constant out to the third decimal place. Not terribly exciting. But here it’s still incredibly new, anything goes--which also often makes it more crazy.

Talking About Thinking

If you want someone to build you a conscious machine, Terry Winograd is not your man. Consciousness, the 46-year-old Stanford computer scientist believes, is a slippery beast, created from the inside out. Machine makers, however, insist on trying to build it from the outside in. They assume that because we move an arm, there’s an equivalent to some lines of programming code in our head that says, Move that arm. But you can’t look at outside behavior, Winograd believes, and assume there’s an interior representation equal to it. Look at language, he says: any single word has multiple meanings and can produce different actions. Yet computers can be programmed only with languages in which one word means one thing. Artificial-intelligence researchers may dream of building conscious machines in the near future. But Winograd says that day is a long way off, at best.

DISCOVER: People involved in artificial intelligence are well known for their enthusiasm. These people are willing to say that in 40 years machines will be indistinguishable from humans in the things that they can do. When you first started working with computers, did you share this kind of enthusiasm?

WINOGRAD: In 1967, when I went to MIT as a graduate student, I shared the enthusiasm of the unexplored possibility. It wasn’t like I said, Oh, yeah, I know this is going to happen in the next ten years, or whatever it is. But certainly I had the enthusiasm of saying, Look, this is new stuff, and we don’t know what’s going to happen, and it’s really interesting to get out there and try.

DISCOVER: How did your viewpoint change?

WINOGRAD: Well, I guess the simple answer is that after working on the problem of language understanding for something like ten years, it was clear there were huge problems, that it wasn’t just a matter of, Okay, we’ve got most of it, now we need to knock off the details.

It became clear how much difference there was between the way we use ordinary language and the kinds of languages that were used to deal with computers. In a computer language there’s a fixed relationship between the items in the language and their meanings. So "if" and "then" in a programming language have particular meanings in terms of the compiler, and "plus" in arithmetic has a very particular meaning in terms of arithmetic. When you look at ordinary language, you realize that for almost any word you use there’s a sort of gradual shading from what you might think of as the real meaning into what we think of as metaphor and analogy and very open-ended uses.

For example, the program I did for language understanding works in this little world of toy blocks. Take the word block, all right? Now, the first thing you notice, in trying to devise a program, is that the word has multiple meanings. Suppose you say, I’m going to take a walk around the block. How does the machine know whether you’re going to walk around this little object or whether you’re going to go hiking? Of course, block means a lot of other things as well. Somebody might say, Could you stick a block underneath the wheel before you jack up the car? What they mean is a rock or a piece of wood or something. They don’t mean anything that has to do with a particular shape--it has to do with the car’s not moving. And block doesn’t have to refer to an object at all. You might say, Well, I was trying to write this paper, but there’s just some kind of block I can’t get over. We’re always using words in ways that, if you look at their pure definitions, aren’t exactly right, but we all understand them and we all use them that way.
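The contrast Winograd draws can be put as a toy sketch: a programming-language keyword means exactly one thing to the compiler, while an ordinary word shades across many senses. The sense list below merely paraphrases his own examples of "block"; choosing the right sense in context is precisely the part a simple lookup cannot do.

```python
# Toy contrast: a keyword has one fixed meaning, while "block" shades across
# many. The senses below paraphrase the interview's examples; picking among
# them requires the open-ended context Winograd is describing.

SENSES_OF_BLOCK = {
    "toy object":      "a small solid shape, as in the blocks world",
    "city block":      "the stretch of streets you take a walk around",
    "wheel chock":     "any rock or piece of wood that keeps the car from rolling",
    "mental obstacle": "whatever is stopping you from writing that paper",
}

def senses(word):
    """A lookup can list a word's senses, but it cannot pick the right one."""
    return list(SENSES_OF_BLOCK) if word == "block" else []

print(senses("block"))   # four very different readings of the same word
# Compare a programming language: the compiler gives "if" exactly one meaning.
```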

That becomes very frustrating when you’re writing computer programs. You reach a point when you’re banging your head against a wall, and you start to say, Is it just that the wall’s a little hard and I need to bang harder? Or is it that my head’s not hard enough? But you might also say, Well, wait a minute, maybe this isn’t the right route. That’s what happened for me over the course of a few years in the late 1970s and early 1980s.

DISCOVER: How did you recast the problem of computers and language and intelligence and consciousness?

WINOGRAD: I think what connected them was focusing on the relationship between action, or being in the world, and cognition, or representation. If I’m moving my arm, and I stop and think about what I’m doing, I can say something like, Okay, I’m moving my arm. One can then reason that there must have been some thought in my head that said, Move my arm, and that thought somehow controlled the arm and it moved. When you build computers, that is exactly the way that you do it. You put in a piece of data structure or a command that calls up a particular sequence of operations. You can point to the particular piece of program and say, These lines, the code, are what did that action. That analogy, which is a very powerful one, makes you look for the way in which anything we observe in what people do has a correlate as, essentially, lines of code in their head.

But the fact that you as an observer can characterize and articulate what happened does not mean that in your consciousness or even your nervous system there is something that correlates to Now do Step A. For me the key shift was saying, It may be that the things we describe as observers are simply not happening that way. And therefore the goal that says, If we can describe it, we can build it, and that we can describe it with formal rules, isn’t necessarily going to be successful.
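The analogy Winograd describes, and then questions, can be made concrete with a made-up example: in a conventional program you really can point at the lines that performed an action. The RobotArm class below is purely illustrative; his caution is that nothing requires an equivalent "move the arm" instruction to exist in the head just because an observer can describe the behavior that way.

```python
# Minimal sketch of the analogy: in a program, specific lines of code
# "did that action." The class and method names are invented for illustration.

class RobotArm:
    def __init__(self):
        self.angle = 0.0

    def move(self, degrees):
        # <-- these are literally "the lines that did that action"
        self.angle += degrees
        print(f"arm moved to {self.angle} degrees")

arm = RobotArm()
arm.move(15.0)   # the observer's description and the internal command coincide here
```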

DISCOVER: Given that limitation, can we ever build computers that are intelligent or conscious?

WINOGRAD: That is, of course, a very difficult question to answer, because first of all, we don’t know what we mean by intelligence, and second of all, we don’t know what we mean by computer. If by computer what you mean is a device that could be built by people, then, of course, it’s a very open-ended question. What if you could actually reproduce a nervous system, essentially neuron by neuron, but build it out of silicon instead of protoplasm? Would it think? Would it be conscious? Well, I’m ultimately a materialist, so I would say of course. If you really could duplicate it piece by piece, it would be all the same pieces; there’s no ethereal soul that makes me have consciousness--it is in the physical properties of my brain and my nervous system. So if you take that very broad notion of computer, then it becomes a matter of whether you’re a materialist or not.

Now, this is a very different question from asking whether it could be done with something like the kinds of computers we now understand and know how to build. People have made a leap, I think, implicitly--and this is what you find when you talk to artificial-intelligence people--that those two are the same question.

The real question is: What is the actual nature of the thing you would have to build in order to have consciousness or intelligence or whatever it is? And I think the assumption that you’d build something like a digital computer program is worth questioning. In particular, question the assumption that when you see certain qualities of what an organism does, you can duplicate those by writing a piece of code that represents them.

DISCOVER: A layman might think of consciousness as being a kind of awareness of oneself and one’s surroundings. That definition would, I’d think, involve representations of those things. I guess that wouldn’t quite fit in with what you were just talking about?

WINOGRAD: Right. See, what I would argue is that the word consciousness in itself is like block or any other word, with very different shades of meaning. One notion of consciousness is this sort of reflective articulation. I’m conscious of what I’m doing if I can say to you, Well, what I’m doing is X, Y, Z. While I’m unconscious that I’m moving my hand in a certain way when I talk, I’m conscious of the fact that I’m trying to explain something.

That’s different from the consciousness that refers to the fundamental thing you experience occasionally when you wake up in the morning and just puzzle about something, like: Here’s this huge universe--you know, zillions of stars--and on this planet, billions of people, and yet what it looks like to me, from my point of view, is a TV show. When I open my eyes, I’m here and it’s me and it’s right now and it’s right here and that has a particular privileged place in the entire universe. That’s your consciousness, right? It is not that you think about being here, that you think anything in particular, but just that it’s pure being in the world. I don’t believe my computer has that experience. It doesn’t have this feeling of Here I am in the world; it is running electrical signals around.

I think the problem in defining consciousness is that a sense of being isn’t something that has any direct external correlate. That is, you could have something that is a cyborg or robot or whatever, which from the outside does the same things as somebody who has that kind of consciousness, and yet the cyborg doesn’t have it. Of course, if you say that, then you are really pushing away from the empiricist, materialistic tradition. Because if there’s no externally observable difference, then it’s a meaningless construct according to an empiricist. But on the other hand, these are deep questions, like Why am I here? I mean, we all have that sense that there is something that is not external but has to do with the fact that we’re here, now, thinking. I can build a robot that says, I think, therefore I am, but it doesn’t mean the robot has that sense.

I think people intuitively tend to oversimilarize. When a computer voice says, How are you today? the immediate assumption is, What’s going on in there is a lot like what it would be like if I said, ‘How are you today?’ to someone. The fact that it happens to be a program that simply plays out a certain piece of voice recording is something I have to stop and think about.

DISCOVER: But I keep hearing people say, look, we have this amount of processing power on hand, and we’re increasing at this rate, and the human brain has this kind of processing power--so many operations per second--and so in 40 years, we’ll have computers that will do everything that humans can and will be as conscious as humans are.

WINOGRAD: I think that’s a quantity-quality fallacy. It says we make the assumption--and it’s a pure hypothesis, right?--that the stuff we’ve got is the right kind of stuff and we just don’t have enough of it. More MIPS, more MFLOPS, more of whatever it is, and we’ll get there. And the bigger the storage, the bigger this, that, and the other. There’s an appeal to that, because it’s certainly easier, technologically, to just get more than it is to do something totally different. So it’s clear why there’s an appeal to computer companies or to someone who wants to sell more power, more processors. There’s a half-truth in it, which is that, in fact, it will take a lot more processing. I don’t think, Oh, no, if we just had the right understanding, it would be a simple machine. I think the human nervous system is what it is partly because it’s got 10¹¹ neurons, or whatever it is, and partly because they work in ways we don’t understand. But the half-truth part of it is the claim that a lot more processing is sufficient, as opposed to merely necessary.

DISCOVER: Would greater understanding come out of more research into the organization of the brain?

WINOGRAD: Yes. It’s going to take a lot more study; of course, it’s hard to study the dynamics of nervous systems, but it’s happening. They now have nuclear magnetic resonance scans that can actually watch changes dynamically while they’re happening. If you could imagine the equivalent of a probe that measured the voltages and chemical concentrations at every single point in the nervous system at every single moment, you might learn a lot.

Of course, we have nothing vaguely like that. That’s going to take a lot of physics and biology, not artificial intelligence.

Minds With Mobility

For Hans Moravec, consciousness arises on the move. That’s appropriate because Moravec is the director of the Mobile Robot Laboratory at Carnegie-Mellon. Trained as a computer scientist at Stanford, Moravec, now 43, has spent his career pushing machines to cope with different situations and make intelligent choices. He’s come to believe that the two things are intrinsically connected. Consciousness, he argues, acts like a monitor on our existence--we’re checking out where we are, what we’re doing, and how we feel--and we need to do that because we’re constantly encountering new situations and we need to make sense of those encounters. Robots up to now have been basically sedentary creatures, specialized machines designed to act only in certain places and certain ways. Once robots are designed to move from environment to environment and to work in different situations, Moravec says, they can begin to attain a rudimentary form of consciousness. DISCOVER caught up with the peripatetic Moravec in Cambridge, Massachusetts, where he is on a sabbatical at Thinking Machines, a leading supercomputer firm.

DISCOVER: In the first sentence of your 1988 book, Mind Children, you wrote: I believe that robots with human intelligence will be common within fifty years.

MORAVEC: It’s not--at least in my circles--all that controversial a statement anymore.

DISCOVER: But in the 1950s people were predicting that we’d have intelligent machines within 20 years.

MORAVEC: Well, the first artificial-intelligence programs devised to make computers do something other than arithmetic were so remarkably successful, so surprisingly good, that it made people overconfident. But it’s pretty much universally accepted now that the problems they were addressing were actually the easy ones. High-level reasoning is something humans do so badly that it’s easy for computers to do as well as we do in that area.

Thinking for us, basically, is a trick, like a dog walking on two legs. It’s amazing we can do it at all, but you can’t really expect too much beyond the mere fact that we do it! So if you want to build a walking machine that walks as well as a dog on its hind legs, it’s not too hard. But if you want to build a walking machine that walks as well as a gazelle, I think you’re in a different ballpark. High-level reasoning is the dog version. Perception, and any place where perception is used to help our thinking, is the gazelle part.

DISCOVER: What do you think happened after that initial burst of excitement in the 1950s?

MORAVEC: Well, then they got to the hard problems. At MIT Marvin Minsky started attaching arms and cameras to a computer. It turns out that programming those machines to do the simplest things, like picking things up, is much harder than proving geometry theorems. And it’s because we set the standard. We’ve been practicing picking things up--seeing objects and picking them up--much longer than we’ve been practicing proving theorems.

Our perceptual motor circuitry, you know, is really very efficient. It’s been a life-or-death matter for 500 million years. It’s highly optimized, it’s very large, and by my calculation, it does the equivalent effective computing of about 10 trillion computations per second; whereas high-level reasoning--based on a comparison of existing computers--seems to be about a one-million-computations-per-second thing. If you look at the more extreme examples--our ability to do arithmetic, for instance--then we’re only at a tiny fraction of an instruction per second. So we’re really awful at some things, like normal reasoning. We’ve probably been practicing it for a little while, only a few hundred thousand years, maybe.

DISCOVER: How did you reach the figure of 10 trillion computations for our nervous system?

MORAVEC: I’ve done a calculation about what it would take to duplicate human vision, mostly based on the amount of nervous tissue involved and the efficiency that’s displayed by the retina. The number I get is something like a trillion computations per second. For chess, in contrast, it looks like a billion computations per second is good enough. That’s about the level of today’s best supercomputers. So for a machine that performs a motor task, like playing tennis, you probably need--well, for the whole nervous system--no less than about 10 trillion computations per second. So you probably won’t have a really good robot tennis player until you get that many computations per second to process all the sensory data and other things.
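The scaling behind these figures can be restated as back-of-the-envelope arithmetic. The numbers below are the round ones Moravec quotes; the assumption that vision accounts for roughly a tenth of the nervous tissue is one reading of how the trillion-per-second vision estimate becomes a ten-trillion-per-second whole-nervous-system estimate, not something he states outright.

```python
# Back-of-the-envelope restatement of the interview's figures. The only added
# step is the (assumed) share of nervous tissue devoted to vision, which is
# what scales the vision estimate up to the whole nervous system.

vision_ops_per_sec = 1e12         # "something like a trillion" for human vision
vision_share_of_brain = 0.1       # assumed fraction of nervous tissue doing vision

whole_nervous_system = vision_ops_per_sec / vision_share_of_brain
print(f"whole nervous system ~ {whole_nervous_system:.0e} computations/s")   # ~1e13

chess_ops_per_sec = 1e9           # "a billion computations per second is good enough"
print(f"vision / chess ratio ~ {vision_ops_per_sec / chess_ops_per_sec:.0f}x")  # ~1000x
```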

By using the retina, though, I may be overestimating, because in the case of invertebrate animals like the sea slug, it looks like just about every neuron has a function and a place. That means its nervous system is really highly optimized. Every neuron is doing exactly what it’s supposed to do and nothing else. Now, that can’t be the case in the human nervous system, because we have something like 100 billion neurons and trillions of connections and we still have a genome with only about a billion bits. So there isn’t enough information in the genome to precisely wire every neuron. Of course, you could have some general rules encoded, but that means you no longer have the option of using each neuron to its absolute maximum by wiring it up in a very specific way. So you’d expect that neurons were being used a little less efficiently in a vertebrate nervous system. And in fact, the observation that mammals have to learn a lot of their behavior indicates that we’re wasting a lot of neurons, because there’s all this general-purpose learning stuff there, which gets wired in a particular way. Presumably, if that particular way is all you wanted, you could get by with a tenth of the neurons, because you don’t need the general-learning structure. In any case, the retina seems to be very carefully wired.
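The wiring argument also reduces to quick arithmetic, using the round numbers from the interview: a billion bits of genome spread over something like a hundred trillion synapses leaves a vanishing fraction of a bit per connection, so the genome can only encode general wiring rules rather than a point-to-point blueprint.

```python
import math

# Quick arithmetic behind the wiring argument, using the interview's round
# numbers: the genome cannot carry enough bits to specify synapses one by one.
genome_bits = 1e9    # "a genome with only about a billion bits"
neurons = 1e11       # "something like 100 billion neurons"
synapses = 1e14      # "trillions of connections" (taken here as ~100 trillion)

print(f"genome bits available per synapse: {genome_bits / synapses:.0e}")   # ~1e-05
# Even naming a single target neuron precisely takes about log2(1e11) bits,
# so explicit point-to-point wiring instructions are out of the question.
print(f"bits just to address one neuron: {math.log2(neurons):.0f}")         # ~37
```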

DISCOVER: But once you have that raw computing ability, how do you translate it into intelligence or consciousness? In your book you say that mobility is the key.

MORAVEC: The ability to move puts you in such a varied set of circumstances that you require pretty general-purpose solutions--it rules out specialized solutions to problems. Mobility is a driver. I envision a universal robot, a machine that is like a personal computer except it operates in the physical world. It’s mobile and can manipulate things and has a sensor system. To orchestrate all that I think we’ll need a minimum thousandfold increase in computing power.

DISCOVER: Could such a robot be conscious?

MORAVEC: Well, it may be that that’s not really the right question to ask, because it may not be a simple yes-no question. You can imagine, for example, having multiple role models inside a single robot that pop up and have an effect once in a while, each one of which, if you asked it or if you examined it in the right way, would appear to be a focus of consciousness.

Imagine a simple robot with some basic sensors. Now imagine you have sort of a processing funnel coming in from the sensors. At the neck of the funnel all the important things and none of the unimportant things get through. Each signal at the neck of the funnel means something important to the robot. There’s fuel available might be one of them. There’s an ax-wielding Luddite nearby might be another. It’s basically the robot’s description of the world in very robocentric terms. Something to which the robot doesn’t have to respond usually doesn’t show up there at all, because the nervous system filters it out.

A natural example of this, which has shown up on at least a half-dozen nature films, is the Sphex wasp. This wasp digs a little hole, catches a caterpillar, brings it back to the hole, checks out the hole, then moves the caterpillar in and lays its eggs. But if somebody moves the caterpillar away a little bit while the wasp is checking out the hole, then when the wasp comes out it has to restart the whole program. It searches for the caterpillar, finds it, brings it back to the entrance, and checks the entrance again, leaving the caterpillar unattended. Then you move it away again. You can do this over and over again, and the wasp never seems to catch on. So there’s good reason to believe that the wasp is not conscious--it has no monitoring process to detect this kind of loop because in its way of life, it’s not usually a problem. There are not that many crazed naturalists to keep taking the caterpillar away.

But in a more complex organism, you might imagine that conditions like that come up fairly frequently, where something leads to something else, which leads to something else, which leads back to the first situation again. A complex organism is always doing different things, and sooner or later it’s going to stumble into one of these ruts. So it would make sense for the organism to have some sort of rut detector.

Now, where would be a good place to put such a detector? At the funnel neck, where there’s a much greater chance that the situation that plays itself out there will repeat itself almost exactly, because the important things are happening in the same sequence. So for the wasp, if there were such a funnel, a detector at the neck would say, Okay, we’re now in checking-out-the-burrow mode, and another one might say, Whoops, maybe the caterpillar disappeared. As a robot programmer, I would say that’s certainly where I’d like to put my card, because there are fewer places to check and all the data reduction’s already been done. So I could put a little watchdog circuit there, which if it detects a repetitive loop behavior, does something to interrupt it.

Now, that seems to me a pretty plausible story for the beginnings of consciousness. Certainly having a little loop detector is nothing like human consciousness--yet it is an independent process that is monitoring the behavior of the organism and its surroundings. It’s also something you probably want in a robot. If you have a maze-running robot and it has any way of discovering that it’s just repeating its actions, then it has some way of getting out of that loop.
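A minimal sketch of the funnel-plus-watchdog arrangement Moravec describes: raw sensor readings are reduced to a few robocentric facts at the neck of the funnel, and a small monitor interrupts when the same reduced situation keeps recurring. The predicates, thresholds, and repetition count below are all invented for illustration.

```python
# Sketch of a "watchdog at the funnel neck": sensor data is reduced to a few
# robocentric facts, and a monitor watches that reduced stream for the kind
# of repeating rut the Sphex wasp never notices. All values are illustrative.

def funnel(raw_sensors):
    """Reduce raw readings to the handful of facts the robot cares about."""
    return (
        ("fuel available", raw_sensors.get("fuel_level", 0.0) > 0.2),
        ("obstacle ahead", raw_sensors.get("range_cm", 999) < 30),
    )

class RutDetector:
    """Interrupts behavior when the same funnel-neck state keeps recurring."""
    def __init__(self, max_repeats=3):
        self.history = []
        self.max_repeats = max_repeats

    def observe(self, state):
        self.history.append(state)
        recent = self.history[-self.max_repeats:]
        if len(recent) == self.max_repeats and len(set(recent)) == 1:
            return "interrupt: stuck repeating the same situation"
        return None

watchdog = RutDetector()
for _ in range(4):                          # the robot keeps re-entering one state
    state = funnel({"fuel_level": 0.5, "range_cm": 10})
    alarm = watchdog.observe(state)
    if alarm:
        print(alarm)
        break
```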

DISCOVER: Would a robot’s inner experience of, say, pain or fear be the same as ours?

MORAVEC: It would certainly act the same way. Third-generation robots--advanced robots, say, 40 years from now--will simulate the world. So if one runs into a situation where it almost falls down some stairs, it won’t just stop with conditioning itself not to do whatever led up to that. It will simulate everything, what would’ve happened if it had fallen. And its simulation will produce an internal picture of itself crumpled in a heap down at the bottom of the stairs. And that will look really awful. If the robot’s internal problem detectors have any sense at all, there will be one for saying, Crumpled in a mass at the bottom of the stairs is really, really bad. So whatever it was you did--really, don’t do that! If the robot has the ability to run through scenarios of the previous day, looking for improvements, then it’s going to be running through this scenario, with this terrible possible consequence, a lot. And it’s going to be looking for all sorts of ways to avoid it. So now this fear, you might say, has gotten a lot richer. It’s starting to resemble human fear. If you were to ask the robot what it felt, it might say, Yes, I think I was afraid, because all my analogies to physiological states mimicked what I know happens in humans in similar circumstances.
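The scenario-replay idea can likewise be sketched in a few lines, with everything (the stand-in simulator, the outcomes, the badness scores) invented for the purpose: the robot re-simulates a near-miss, a problem detector scores the imagined outcome, and the action leading to it accumulates a strong avoidance weight, the beginnings of the richer fear Moravec describes.

```python
# Sketch of the scenario replay Moravec describes. The world model, outcomes,
# and scores are stand-ins invented for illustration, not a real robot design.

OUTCOME_BADNESS = {
    "crumpled at the bottom of the stairs": 10.0,   # "really, really bad"
    "stopped safely at the top step": 0.0,
}

def simulate(action):
    """Stand-in world model: what would have happened had the robot done this?"""
    if action == "step forward near the stairs":
        return "crumpled at the bottom of the stairs"
    return "stopped safely at the top step"

avoidance = {}   # action -> accumulated aversion, the robot's stand-in for fear
for _ in range(5):   # replaying the previous day's near-miss, looking for improvements
    action = "step forward near the stairs"
    avoidance[action] = avoidance.get(action, 0.0) + OUTCOME_BADNESS[simulate(action)]

print(avoidance)   # the action tied to the imagined disaster now carries a large
                   # avoidance weight: the beginnings of something like fear
```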

DISCOVER: Will designing such robots require a major undertaking, something like the Human Genome Project?

MORAVEC: Probably larger. The Human Genome Project, after all, is concerned with the process of getting fewer than 2 billion base pairs, which is really a very small number compared with--well, it depends how you measure it, but there might be 100 trillion synapses in your head. This project wouldn’t have to get the data on every single synapse, but the number of microfacts about the world that will have to be slowly taught to robots by trial and error is enormous. So I think in many ways this will be harder than the genome project. But I think this is a major passing of the torch to a new form of life.
