Evolving A Conscious Machine

Some computer scientists think that by letting chips build themselves, the chips will turn out to be stunningly efficient, complex, effective, and weird—kind of like our brains.

By Gary Taubes
Jun 1, 1998


There are some subjects of discourse, politics probably foremost among them, that are best discussed in a pub, where any seemingly definitive judgment is unlikely to be taken too seriously. One such subject is computer consciousness: whether a suitably intelligent computer can achieve a sense of I, as can, allegedly, a suitably intelligent human. This is the topic on a rainy Saturday afternoon in November, at a pub called The Swan in Falmer, a village outside Brighton, England, about an hour’s train ride south of London. The pub is a few hundred yards (or rather, meters) from the campus of the University of Sussex, home of the Center for Computational Neuroscience and Robotics.

My companions are two Sussex computer scientists, Inman Harvey (Inman is my first name and Harvey is my last, which confuses Americans quite a lot) and Adrian Thompson. Harvey is bearded, heavyset, and pushing 50. A former student of philosophy, he ran an import-export business out of Afghanistan for two decades before returning to academia to become a computer scientist, or, as he calls himself, an evolutionary roboticist. Thompson is in his late twenties and has been programming since he was an adolescent, although he was not the archetypal computer nerd, he says, because he wasn’t interested in computer games, only computers. He arrived at the pub in a flannel shirt and mud-splattered jeans, since his car is in disrepair and he had to walk over the South Downs. The three of us are sitting in a corner of the pub, eating lunch and drinking the local ale, while Harvey, who has just written a chapter for a textbook on evolving robot consciousness, holds forth on the computer sense of self.

The gist of Harvey’s argument seems to be that we cannot assess such concepts as consciousness free of our prejudices, our belief, based on past experience, that humans are conscious but machines and other inanimate objects are not. To support his point, Harvey suggests that if you were walking across a road and saw a car-size block of stone tumbling down a hillside in your direction, you would give it considerably more leeway, with greater urgency, than you would a car-size car driving toward you at the same speed. If you see a car coming toward you, he explains, you assume it has a human being in it who wants to avoid you. You may be a bit careful about stepping right in front of it, but you don’t worry about it veering toward you.

So it is with humans and machines. We think humans are conscious because we believe in our own sense of self. What happens in our brains is mysterious, even incomprehensible, and the mystery seems to leave room for the glamour of consciousness. However, machines or robots—Harvey doesn’t like the word computer, because its technical meaning is a machine that uses symbols to compute, which is nothing like the way human or animal brains work—are typically wired by design, with circuit diagrams that can be reduced to their individual logic elements and software programs that can be analyzed and understood, so it’s hard to imagine how a complex thinking machine could have the kind of conscious I we do. But what if you could evolve a robot, made of the usual silicon, wire, and transistors, that appeared to act consciously, with thought processes as unfathomable as our own? With evolved robots, says Harvey, you can’t analyze how they work, and so you’re forced much more easily toward taking the stance that will, at the end of the day, attribute consciousness to a robot. That’s the way it goes.

Thompson isn’t buying this. Indeed, he doesn’t even want to discuss it. He eats his fish and chips dutifully and seems to be wishing the conversation would meander elsewhere. His attitude is mildly ironic because, for the past few years, Thompson has been playing with computers in which the hardware evolves to solve problems, rather the way our own neurons evolved to solve problems and to contemplate ourselves. He is one of the founding members of a field of research known as evolvable hardware or evolutionary electronics. Thompson uses a type of silicon processor that can change its wiring in a few billionths of a second, taking on a new configuration. He gives the processor a task to solve: for instance, distinguishing between a human voice saying “stop” or “go.” Each configuration of the wiring is graded on how well it did, and then those configurations that scored high are mated together to form new circuit configurations. Since all this manipulation is carried out electronically, the wiring of the processor can evolve for thousands of generations, eventually becoming a circuit that Thompson describes as flabbergastingly efficient at solving the task.
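The loop Thompson describes—grade every configuration, keep the high scorers, mate them to produce the next generation—is the core of a genetic algorithm. Here is a minimal Python sketch of that loop, not Thompson’s actual system: a circuit configuration is modeled as a bit string, and the fitness function is a deliberately toy stand-in (counting 1 bits), since on the real chip fitness would be measured by how well the configured circuit performs the task. All names and sizes are invented for illustration.

```python
import random

GENOME_LEN = 64    # bits per configuration (illustrative size)
POP_SIZE = 20      # configurations per generation
GENERATIONS = 200

def fitness(genome):
    # Toy score: fraction of bits set to 1. On real evolvable hardware,
    # this would instead be a measured grade of the circuit's behavior.
    return sum(genome) / GENOME_LEN

def mate(a, b):
    # Single-point crossover of two parent configurations,
    # followed by a one-bit mutation.
    cut = random.randrange(1, GENOME_LEN)
    child = a[:cut] + b[cut:]
    child[random.randrange(GENOME_LEN)] ^= 1
    return child

def evolve():
    # Start from random configurations.
    pop = [[random.randint(0, 1) for _ in range(GENOME_LEN)]
           for _ in range(POP_SIZE)]
    for _ in range(GENERATIONS):
        pop.sort(key=fitness, reverse=True)   # grade every configuration
        parents = pop[:POP_SIZE // 2]         # high scorers survive...
        children = [mate(random.choice(parents), random.choice(parents))
                    for _ in range(POP_SIZE - len(parents))]
        pop = parents + children              # ...and are mated together
    return max(pop, key=fitness)

best = evolve()
print(f"best fitness after {GENERATIONS} generations: {fitness(best):.2f}")
```

Because the whole cycle is just data manipulation, thousands of generations take seconds in software; Thompson’s twist is that each candidate is graded on physical silicon rather than in simulation.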

Copyright © 2024 Kalmbach Media Co.