Why I'm Not Afraid of the Singularity


I have a confession. I used to be all about the Singularity. I thought it was inevitable. I thought for certain that some sort of Terminator/HAL 9000 scenario would happen when ECHELON achieved sentience. I was sure The Second Renaissance from the Animatrix was a fairly accurate depiction of how things would go down: we'd make smart robots, we'd treat them poorly, they'd rebel and slaughter humanity. Now I'm not so sure. I have big, gloomy doubts about the Singularity. Michael Anissimov tries to restoke the flames of fear over at Accelerating Future with his post "Yes, The Singularity is the Single Biggest Threat to Humanity."

Combine the non-obvious complexity of common sense morality with great power and you have an immense problem. Advanced AIs will be able to copy themselves onto any available computers, stay awake 24/7, improve their own designs, develop automated and parallelized experimental cycles that far exceed the capabilities of human scientists, and develop self-replicating technologies such as artificially photosynthetic flowers, molecular nanotechnology, modular robotics, machines that draw carbon from the air to build carbon robots, and the like. It's hard to imagine what an advanced AGI would think of, because the first really advanced AGI will be superintelligent, and be able to imagine things that we can't. It seems so hard for humans to accept that we may not be the theoretically most intelligent beings in the multiverse, but yes, there's a lot of evidence that we aren't.

[...]

Humans overestimate our robustness. Conditions have to be just right for us to keep living. If AGIs decided to remove the atmosphere or otherwise alter it to pursue their goals, we would be toast. If temperatures on the surface changed by more than a few dozen degrees up or down, we would be toast. If natural life had to compete with AI-crafted cybernetic organisms, it could destroy the biosphere on which we depend. There are millions of ways in which powerful AGIs with superior technology could accidentally make our lives miserable, simply by not taking our preferences into account. Our preferences are not a magical mist that can persuade any type of mind to give us basic respect. They are just our preferences, and we happen to be programmed to take each other's preferences deeply into account, in ways we are just beginning to understand. If we assume that AGI will inherently contain all this moral complexity without anyone doing the hard work of programming it in, we will be unpleasantly surprised when these AGIs become more intelligent and powerful than ourselves.

Oh my stars, that does sound threatening. But again, that weird, nagging doubt lingers in the back of my mind. For a while, I couldn't put my finger on the problem, until I re-read Anissimov's post and realized that my disbelief flared up every time I read something about AGI doing something. AGI will remove the atmosphere. Really? How? The article, and in fact every argument about the danger of the Singularity, necessarily presumes a single premise: that AGI will be able to interact with the world beyond computers. I submit that, in practical terms, it will not.

Consider the example of Skynet. Two very irrational decisions had to be made to allow Skynet to initiate Judgment Day. First, the A.I. that runs Skynet debuted on the military network. In the mythos of the film, Skynet does not graduate from orchestrating minor battle plans or strategizing invasions in the abstract; it goes straight from the coder's hands to having access to the nuclear birds. Second, in the same moment, the military rolls out a fleet of robot warriors linked to Skynet, effectively giving the A.I. hands and then putting guns in those hands.

My point is this: if Skynet had debuted on a closed computer network, it would have been trapped within that network. Even if it escaped and "infected" every other system (which is dubious, given the computing power a first-iteration super AGI would require), the A.I. would still have no access to physical reality. Singularity arguments rely on the presumption that technology can work without humans. It can't. If an A.I. decided to obliterate humanity by launching all the nukes, it would also annihilate the infrastructure that powers it, and methinks self-preservation should be a basic feature of any real AGI. In short: any super AGI that comes along is going to need some helping hands out in the world to do its dirty work.
B-b-but, the Singulitarians argue, "an AI could fool a person into releasing it because the AI is very smart and therefore tricksy." This argument is preposterous. Philosophers constantly argue as if every hypothetical person is either a dullard or hyper-self-aware; the argument that A.I. will trick people is an example of the former. Seriously, the argument is that very smart scientists will be conned by an AGI they helped to program. And so what if they are? Is the argument that a few people are going to be hypnotized into opening up a giant factory run only by the A.I., where every process in the vertical and the horizontal (as in economic infrastructure, not The Outer Limits) can be run without human assistance? Is that how this is going to work? I highly doubt it. Even the most brilliant AGI is not going to be able to restructure our economy overnight.

So keep your hats on, folks; don't start fretting about evil AGI until we live in an economy run solely on robot labor. Until then, I just can't see it. I can't see how AGI gets hands. Maybe that's a limit on my vision. But even if the nightmare scenario of AGI going sentient and rogue overnight comes true, I think we're still in good shape. Sure, it might screw up our communications networks, but it's not going to be able to do much of anything outside a computer. Anytime you start getting nervous, remember all the things we still need people to do, and how much occurs beyond the realm of the computer. In that light, the Singularity is just a digital tempest in a teacup.

Image of a very scary computer bank from k0a1a.net's photostream

via Flickr Creative Commons

Follow Kyle Munkittrick on Twitter @PopBioethics
