The nerd echo chamber is reverberating this week with the furious debate
over Charlie Stross' doubts about the possibility of an artificial "human-level intelligence" explosion – also known as the Singularity. As currently defined, the Singularity is a future event in which artificial intelligence reaches human-level intelligence. At that point, the AI (call it AI(n)) will reflexively begin to improve itself and build AIs more intelligent than itself (AI(n+1)), resulting in an exponential explosion of intelligence toward near-deity levels of super-intelligent AI.

After reading over the debates, I've come to the conclusion that both sides miss a critical element of the Singularity discussion: the human beings. Putting people back into the picture allows for a vision of the Singularity that simultaneously addresses several philosophical quandaries. To get there, however, we must first re-trace the steps of the current debate. I've already made my case for why I'm not too concerned
, but it's always fun to see what fantastic fulminations are being exchanged over our future AI overlords. Sparking the flames this time around is Charlie Stross
, who knows a thing or two about the Singularity and futuristic speculation. It's the kind of thing this blog exists to cover: a science fiction author tackling the rational scientific possibility of something about which he has written. Stross argues in a post entitled "Three arguments against the singularity
" that "In short: Santa Clause doesn't exist."
This is my take on the singularity: we're not going to see a hard take-off, or a slow take-off, or any kind of AI-mediated exponential outburst. What we're going to see is increasingly solicitous machines defining our environment — machines that sense and respond to our needs "intelligently". But it will be the intelligence of the serving hand rather than the commanding brain, and we're only at risk of disaster if we harbour self-destructive impulses. We may eventually see mind uploading, but there'll be a holy war to end holy wars before it becomes widespread: it will literally overturn religions. That would be a singular event, but beyond giving us an opportunity to run [Robert] Nozick's experience machine thought experiment for real, I'm not sure we'd be able to make effective use of it — our hard-wired biophilia will keep dragging us back to the real world, or to simulations indistinguishable from it.
I am thankful that many of the fine readers of Science Not Fiction are avowed skeptics who raise a wary eyebrow at discussions of the Singularity. Given Stross' stature in the science fiction and speculative science community, his comments elicited quite an uproar. Those who are believers (and it is a kind of faith, regardless of how much Bayesian
analysis one does) in the Rapture of the Nerds have two holy grails which Stross unceremoniously dismissed: the rise of super-intelligent AI and mind uploading. As a result, a few commentators on emerging technologies squared off for another round of speculative slap fights. In one corner, we have Singularitarians Michael Anissimov
of the Singularity Institute for Artificial Intelligence and AI researcher Ben Goertzel
. In the other, we have the excellent Alex Knapp
of Forbes' Robot Overlords and the brutally rational George Mason University (my alma mater) economist and Oxford Future of Humanity Institute contributor Robin Hanson
. I'll spare you all the back and forth (and all of Goertzel's infuriating emoticons) and cut to the point being debated. To paraphrase and summarize, the argument is as follows:

1. Stross' point: Human intelligence has three characteristics: embodiment, self-interest, and evolutionary emergence. AI will not/cannot/should not mirror human intelligence.

2. Singularitarian response: Anissimov and Goertzel argue that human-level general intelligence need not function or arise the way human intelligence has. With sufficient research and devotion to Saint Bayes, super-intelligent friendly AI is probable.

3. Skeptic rebuttal: Hanson argues A) "intelligence" is a nebulous, ill-defined catch-all, like "betterness." The ambiguity of the word makes the claims of Singularitarians difficult or impossible to disprove (i.e. special pleading). Knapp argues B) computers and AI are excellent at specific types of thinking and at augmenting human thought (i.e. Kasparov's Advanced Chess). Even if one grants that AI could reach or exceed human level, the nature of that intelligence would be neither independent, nor self-motivated, nor sufficiently well-rounded; as a result, "bootstrapping" intelligence explosions would not happen as Singularitarians foresee.

In essence, the debate boils down to "human intelligence is like this, AI is like that, never the twain shall meet. But can they parallel one another?" The premise is false, and it results in a useless question. So what we need is a new premise. Here is what I propose instead: the Singularity will be the result of a convergence and connection of human intelligence and artificial intelligence.

Intelligence is extremely hard to define. For the sake of discussion, I'll define it here as the "ability to analyze a situation, determine a problem, develop a solution, and execute." As Knapp's example of Kasparov's Advanced Chess illustrates, humans and computers are each much better than the other at specific elements of chess. A computer is significantly more intelligent when it comes to chess tactics; a human is significantly more intelligent when it comes to strategy. Extrapolating this analogy (as well as Knapp's analysis of Watson on Jeopardy!) points toward a human intelligence superiority in abstraction, invention, creativity, and imagination, and a computer intelligence superiority in calculation, data analysis, and information retrieval.

Thus, I propose a new analogy for the two types of intelligence represented by humans and computers: the right and left hemispheres of the human brain. It is often said that humans are the animal that can reason. But that description is incomplete. Humans are the animal that can reason creatively and abstractly, or perform the inverse: imagine logically and rationally.
To my knowledge (I'd love to be corrected), computers and AI algorithms cannot at this point replicate any form of right-brain thinking. But computers are orders of magnitude better at short-term, sharp-focus left-brain thinking. Combine this line of thought with the extended mind hypothesis of Andy Clark and the augmentation-based Singularity survival strategy of David Chalmers, and a picture of a cybernetic future begins to emerge. Thus, I argue the Singularity should be re-imagined as a cybernetic process in which the human mind is progressively augmented with better and more complementary artificial left-brain capacities. As Advanced Chess demonstrates, a human with a computer is far superior to either a human alone or a computer alone.

Consider the analogy of Geordi La Forge working with the USS Enterprise's computer, as compared with Data. Through the Enterprise, Geordi has access to the same vast processing power Data possesses, but he also retains his own creative and inventive capacities, which the Enterprise alone cannot mirror. Data's most "human" moments come when he expresses these right-brain tendencies; they are, in fact, what is referenced when defending Data's personhood. They are what make him unique and impossible to replicate with ease.

At our current state of technology, smartphones represent the most advanced and prolific form of cybernetic left-brain augmentation. These hand-held exobrains allow us to perform a multitude of processes and to recall or access tremendous amounts of information through visual and auditory interfaces. As neuro-interface technology
improves (hat tip Greg Fish
), the information on the internet and stored in our external brains will become more expansive and more intimately connected with our nervous systems. The steps toward the Singularity will not be the progressive improvement of general AI but the gradual blending of the biological wetware of the human brain with the artificial hardware of computer technology. The Singularity will be the perfection of the mind-computer interface, such that where the mental processes of the human right-brain end and the high-powered computational processes of the artificial left-brain begin will be indistinguishable, both externally by objective observation and internally by the subjective experience of the individual. I call this event the Cybernetic Singularity.

The Cybernetic Singularity differs from the AI Singularity in several ways and, in the process, solves several AI conundrums, both technological and philosophical.

First and foremost, the ethical "can of worms" of creating pure AI is eliminated. So long as the person having his or her mind augmented grants rationally informed and deliberative consent, no breach of ethics occurs. The concern over experimentally creating, shutting off, or restrictively programming a new form of life is eliminated.

Second, the problem of completely replicating the human mind is eliminated. Cybernetic augmentation will enhance those processes of the brain at which computers excel – memory, data analysis, and computation – without needing to replicate aspects of the brain we are barely beginning to understand, like imagination and creativity.

Third, the theological fears and philosophical qualms around uploading will be mitigated by the slow integration and blending process. Theologians can presume the "seat of the soul" rests in the right hemisphere.
Because the process is gradual and the self can reflexively begin to include the augmentations into the mind's "I" construction, the worries over mind-clones and other philosophical oddities are reduced to interesting thought experiments.

Fourth, the technology is feasible. Memory stimulation, cochlear implants, bionic eyes, and haptic interfaces for prosthetics are rudimentary but real, existing forms of neuro-computer interfaces.

Fifth, and most relevant for fans of the apocalypse, no "hard take-off" or "AI bootstrapping" will occur. In part, this is because the blending will be gradual as interfaces and technology incrementally improve: no one augmented person will be unstoppably, or even significantly, more "intelligent" than other augmented individuals. And in part it is because there will be a human being at the center of the cyber-brain, still able to make ethical decisions and to express self-interest that expands to the universal level of humanity's self-interest.

The final reason I believe the Cybernetic Singularity is more probable than the AI Singularity is simply that it makes more sense. AIs designed to do very specific, labor- and data-intensive tasks make economic sense and are of obvious value; AIs designed to mirror things humans are naturally good at seem pointless. Humans have augmented our memory, our ability to calculate, and our ability to process data reliably throughout history. We've been slowly augmenting our left hemisphere since the invention of language.

In sum, the Cybernetic Singularity is the logical extension of a process humans have been pursuing throughout history: the augmentation of our brain's computational left hemisphere. By recognizing the relative functions of the hemispheres of the human brain, we can see how cybernetic augmentation of the left hemisphere will enable significant increases in some forms of intelligence. Pure general AI is not necessary for an intelligence increase. My theory of the Cybernetic Singularity reconciles the exponential increase in computing technology with the tremendous hurdles facing AI, and it overcomes the ethical, philosophical, and theological concerns around the creation of AI and mind uploading. The result is a human future that we can reasonably, incrementally, and ethically pursue.

Follow Kyle on his personal blog
, Pop Bioethics, and on facebook
.

Image via theWarehouse
Hat tip to Futurismic
for many of the links.