When discussing consciousness, it is easy to consider only the observable and measurable attributes that we associate with being conscious. But this approach misses the essence of this ineffable concept. Our ability to express a loving sentiment, to get a joke, or to be sexy is simply a type of performance—impressive and intelligent, perhaps, but a skill that can be observed and measured. Although it is difficult to figure out how the brain accomplishes these sorts of tasks, and what is going on in the brain when it does (indeed, that represents perhaps the most difficult and important scientific quest of our era), even answering those questions would still miss the true idea of consciousness.
My own view is that consciousness is an emergent property of a complex physical system. In this view, a dog is also conscious but somewhat less so than a human. An ant has some level of consciousness, but much less than that of a dog. The ant colony, on the other hand, could be considered to have a higher level of consciousness than the individual ant; it is certainly more intelligent.
By this reckoning, a sufficiently complex machine can also be conscious. A computer that successfully emulates the complexity of a human brain would also have the same emergent consciousness as a human.
My objective prediction is that machines in the not-so-distant future will appear to be conscious. They will be convincing when they speak of their qualia—the fundamental experiences of consciousness, like the sense of the color red or the feeling of diving into water. They will exhibit the full range of familiar emotional cues; they will make us laugh and cry; and they will get mad at us if we say we don’t believe they are conscious. (They will also be very smart, so we won’t want that to happen.) We will accept that they are conscious persons. My subjective leap of faith is this: Once machines succeed in being convincing when they speak of their conscious experiences, they will indeed be conscious persons.
I have come to my position via the following thought experiment. Imagine that you meet an entity in the future—a robot or an avatar—that is completely convincing in her emotional reactions. She convinces you of her sincerity when she speaks of her fears and longings. In every way, she seems conscious. She seems, in fact, like a person. Would you accept her as a conscious person? If this entity were threatened with destruction and responded, as a human would, with terror, would you react in the same empathetic way that you would if you witnessed such a scene involving a human? For me, the answer is yes, and I believe the answer would be the same for most, if not virtually all, other people, regardless of what they might assert in a philosophical debate.
There is certainly disagreement among scientists and philosophers on when, or even whether, we will encounter such a nonbiological entity. My own consistent prediction is that this will first take place by 2029 and become routine in the 2030s. I base that prediction on my “law of accelerating returns,” which I describe in my book The Singularity Is Near. In short, an evolutionary process, in biology or technology, inherently accelerates as a result of its increasing levels of abstraction, and its products grow exponentially in price-performance and capability.
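The exponential growth the law of accelerating returns describes can be sketched in a few lines. This is an illustrative model only: the one-year doubling time is an assumed parameter for demonstration, not a figure from the text.

```python
# Illustrative sketch of exponential growth in price-performance,
# in the spirit of the law of accelerating returns. The doubling
# time is an assumption chosen for illustration.

def price_performance(years: float, doubling_time: float = 1.0) -> float:
    """Relative price-performance after `years`, normalized to 1.0 at year 0."""
    return 2 ** (years / doubling_time)

# Under a one-year doubling time, a decade yields a 1,024-fold improvement:
decade_gain = price_performance(10)  # 1024.0
```

The key property of such a curve is that each fixed interval multiplies capability by the same factor, which is why forecasts based on it compress what looks like centuries of linear progress into decades.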
But putting the time frame aside, I firmly believe that we will eventually come to regard such entities as conscious. Consider how we already treat them when exposed to them as characters in stories and movies: R2-D2 from the Star Wars movies, Data from the TV series Star Trek: The Next Generation, WALL-E, and Rachael the replicant from Blade Runner (who, by the way, is not aware that she is not human). We empathize with these characters even though we know they are nonbiological. We regard them as conscious persons, just as we do biological human characters. We share their feelings and fear for them when they get into trouble. If that is how we treat fictional nonbiological characters today, then that is how we will treat real-life intelligences in the future that don’t happen to have a biological substrate.
If you accept the leap of faith that a nonbiological entity that is convincing in its reactions to qualia is conscious, then you accept my conclusion that consciousness is an emergent property of the overall pattern of an entity, not the substrate it runs on.
There is a conceptual gap between science, which stands for objective measurement and the conclusions we draw from it, and consciousness, which is synonymous with subjective experience. Some observers question whether consciousness itself has any basis in reality. But we would be well advised not to dismiss the concept as merely a topic of polite debate among philosophers.
The idea of consciousness underlies our moral system, and our legal system in turn is built on those moral beliefs. If a person extinguishes someone’s consciousness, as in the act of murder, we consider that to be immoral and, with some exceptions, a high crime. Those exceptions are also relevant to consciousness: We might authorize police or military forces to kill certain conscious people to protect a greater number of other conscious people. If I destroy my own property, it is probably acceptable. If I destroy your property without your permission, it is probably not acceptable—not because I am causing suffering to your property but rather to you as the owner of the property. If my property includes a conscious being such as an animal, then I as the owner of that animal do not necessarily have free moral or legal rein to do with it as I wish; there are laws against animal cruelty.
Because much of our moral and legal system is based on protecting conscious entities, we need to answer the question of who is conscious to make responsible judgments. That question is not simply a matter for intellectual debate, as is evident in the controversy surrounding an issue like abortion, which is fundamentally a debate about when a fetus becomes a conscious person. What’s more, if we regard consciousness as an emergent property of a complex system, we cannot take the position that it is just another attribute (along with “digestion and lactation,” to quote philosopher John Searle). It represents what is truly important.
The word spiritual is often used to denote things of ultimate significance. Many people don’t like to use such terminology in relation to consciousness because it implies sets of beliefs that they may not subscribe to. But if we strip away the mystical complexities of religious traditions and simply respect “spiritual” as implying something of profound meaning to humans, then the concept of consciousness fits the bill. It reflects the ultimate spiritual value.
People often feel threatened by discussions that imply a machine could be conscious; they view considerations along these lines as a denigration of the spiritual value of conscious persons. But this reaction reflects a misunderstanding of the concept of a machine. Such critics are addressing the issue based on the machines they know today, and as impressive as they are becoming, I agree that contemporary examples of technology such as your smartphone and notebook computer are not yet worthy of our respect as conscious beings.
My prediction is that tomorrow’s machines will become indistinguishable from biological humans, and they will share in the spiritual value we ascribe to consciousness. This is not a disparagement of people; rather, it is an elevation of our understanding of (some) future machines.
Ray Kurzweil, author of the New York Times best-seller The Singularity Is Near, is a pioneering inventor and theorist of artificial intelligence. Reprinted by arrangement with Viking Penguin, a member of Penguin Group (USA) Inc., from How to Create a Mind by Ray Kurzweil. Copyright © 2012 by Ray Kurzweil.
Head in the Cloud
Merging our brains with computers will infinitely expand our intelligence.
We are now in a position to speed up the human learning process by a factor of thousands or millions by migrating from biological to nonbiological intelligence. Ultimately we will create an artificial neocortex that has the full range and flexibility of its human counterpart. Consider the benefits. Electronic circuits are millions of times faster than our biological circuits. The digital neocortex will be much faster than the biological variety, and will only continue to increase in speed.
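The "millions of times faster" claim can be sanity-checked with rough arithmetic. The figures below are widely cited ballpark numbers, not values from the text: biological neurons fire on the order of a couple hundred times per second, while electronic circuits switch on the order of gigahertz.

```python
# Back-of-the-envelope check on the speed gap between biological
# and electronic circuits. Both rates are ballpark assumptions.

NEURON_RATE_HZ = 200              # ~max biological firing rate (assumed)
CIRCUIT_RATE_HZ = 2_000_000_000   # ~2 GHz electronic switching (assumed)

speedup = CIRCUIT_RATE_HZ / NEURON_RATE_HZ  # 10,000,000x
```

Even with conservative inputs, the ratio lands in the millions, consistent with the claim in the passage.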
When we augment our own neocortex with a synthetic version, we won’t have to worry about how much additional neocortex can physically fit into our bodies and brains, as most of it will be in the “cloud,” stored on the Internet and retrieved wirelessly, like most of the computing we use today. I estimate that humans have on the order of 300 million pattern recognizers—the basic and ingenious mechanism for recognizing, remembering, and predicting a pattern, which accounts for the great diversity of our thinking—in our biological neocortex. That’s as much as could be squeezed into our skulls, even with the evolutionary innovation of a large forehead and with the neocortex taking about 80 percent of the available mass. As soon as we start thinking in the cloud, there will be no natural limits. We will be able to use billions or trillions of pattern recognizers, basically whatever we need, and whatever the law of accelerating returns can provide at each point in time.
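The capacity argument above reduces to simple arithmetic. The 300 million figure is the estimate cited in the passage; the cloud-extension size is a hypothetical input chosen for illustration.

```python
# Sketch of the capacity arithmetic in the passage. The biological
# figure comes from the text; the cloud figure is hypothetical.

BIOLOGICAL_RECOGNIZERS = 300_000_000  # pattern recognizers per human neocortex (text's estimate)

def cloud_multiple(cloud_recognizers: int) -> float:
    """How many biological neocortices' worth of pattern recognizers
    a cloud extension of the given size would provide."""
    return cloud_recognizers / BIOLOGICAL_RECOGNIZERS

# A hypothetical one-trillion-recognizer extension would equal
# roughly 3,333 biological neocortices:
multiple = cloud_multiple(1_000_000_000_000)
```

The point of the calculation is that the skull imposes a hard constant while the cloud imposes none; any growth in available hardware translates directly into more recognizers.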
In order for a digital neocortex to learn a new skill, it will still require many iterations of education, just as a biological neocortex does. But once a single digital neocortex learns something, it can share that knowledge with every other digital neocortex without delay. We can each have our own private neocortex extenders in the cloud, just as we have our own private stores today of music, digital photos, and other personal data. We will be able to back up the new digital portion of our intelligence. It is frightening to contemplate that none of the information in our neocortex is backed up today.—R. K.