Google Glass: Artificial Unconscious?

By Neuroskeptic | May 25, 2013 1:32 PM

Google Glass is cool. But could it be philosophically dangerous?

Sixty years ago, Ludwig Wittgenstein famously wrote:

Where does this idea come from? It is like a pair of glasses on our nose through which we see whatever we look at. It never occurs to us to take them off.

The "idea" in this case was a particular philosophical theory about language. Wittgenstein was saying that other philosophers were making use of this idea without realizing it, unconsciously - so he chose the metaphor of glasses, which are always right before us, filtering what we see, even though we're rarely aware of them.

Perhaps all technology so far has been an extension of the conscious parts of our mind. Computers let us do the things we consciously choose to do, better: to talk over distances, remember more accurately, see and hear more stuff - on demand. Google Glass and other smart glasses do all that as well, but I wonder if they'll soon go one better: they could extend or modify our unconscious mental processes.

Consider, for example, some smart glasses set up to detect anything that looked like a spider in front of their camera, and overlay it with a red flashing box on the user's display whenever one was spotted. I think this would make you obsessed with spiders. You'd notice them everywhere, and you'd find it hard to concentrate on anything else so long as one was in view. You might like them or hate them, but you would be preoccupied with them, and if you were scared of them, this spider-focus would certainly make matters worse.

Or again, your glasses could analyze the facial expressions of people you meet, perhaps displaying the results (85% happy, etc...) floating above their heads. But what if the algorithm were poorly calibrated, so that it wrongly said that most people were angry at you? How would that affect you over the long run...?

I took these examples from recent psychological theories about the cognitive processes in spider phobia and depression (1,2). The original idea was that it's largely unconscious processes in the mind that are (mis)directing attention - but it seems to me that technology could produce the same kind of effects.

These examples are just for illustration; no-one's going to install an app that does such obvious harm. They show, however, the way in which smart glasses could - unlike existing technology - not just change what we do, but how we see, and therefore how we think.
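Just to make the spider example concrete, here's a rough sketch of what such a detect-and-flash overlay loop might look like. This is purely illustrative - OpenCV doesn't ship a spider detector, so a stock face cascade stands in for one, and real smart glasses obviously wouldn't be a dozen lines of Python - but the attention-grabbing mechanism is the same: find the object, then make it impossible to ignore.

```python
# Hypothetical sketch of the "spider-highlighting" overlay described above.
# The face cascade is a stand-in detector; the point is the flashing overlay.
import cv2

detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

cap = cv2.VideoCapture(0)  # the glasses' forward-facing camera
frame_count = 0

while True:
    ok, frame = cap.read()
    if not ok:
        break

    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    boxes = detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    # Flash: draw the red box only on alternate bursts of frames,
    # so the highlight blinks and keeps pulling the eye back to it.
    if frame_count % 10 < 5:
        for (x, y, w, h) in boxes:
            cv2.rectangle(frame, (x, y), (x + w, y + h), (0, 0, 255), 3)

    cv2.imshow("display", frame)  # what the wearer would see
    frame_count += 1
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```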
