
Jaron’s World: Computer Evolution

Most software stinks. It should learn from robots and bacteria.

By Jaron Lanier
Jun 27, 2007 5:00 AM | Updated Nov 12, 2019 5:35 AM
[Image: computer room. Image courtesy of USGS]


There’s an intriguing new book out called Dreaming in Code by Scott Rosenberg (a cofounder of the Salon Web site), which centers on a group of engineers struggling to create a piece of personal productivity software called Chandler. I make an appearance in the tale, although I wasn’t involved with this particular project. The book’s title refers to a problem I used to have after intense periods of programming: I remember waking up to find I had been dreaming in computer code—eight-bit machine language, no less.

There wouldn’t be a story worthy of a whole book if Chandler had come together easily and everyone had gone home happy. Indeed, Dreaming is an examination of the stressful mysteries of software. Why do some software projects sail to completion while so many others seem cursed? Why must software development be so difficult to plan?

These questions should concern everyone interested in science, not just programmers, because computer code is increasingly the language we use to describe and explore the dynamic aspects of reality that are too complicated to solve with equations. A comprehensive model of a biological cell, for instance, could lead to major new insights in biology and drug design. But how will we ever make such a model if the engineering of a straightforward thing like a personal productivity package confounds us?

For decades I have been chasing another kind of dream, that there will eventually be a less dismal way to think about software, and therefore about complexity. I have been calling this dream “phenotropics,” a word that roughly translates to “surfaces relating to each other.”

One way to understand phenotropics is to start by noticing that computer science has been divided by a fundamental schism. One side is characterized by precisely defined software structures. This is the approach to computers that requires you to make up a boundless number of silly names for abstract things, like the files on your hard drive. This was the only kind of computer science that was possible on the slow computers we were stuck with until fairly recently.

While advances in “high-level” languages (ones that use specialized syntax to carry out a lot of smaller, fussier commands—JavaScript, for example) have made this approach more human friendly over the years, the core of what we do when we program has barely changed. It still echoes the mathematical constructions of John von Neumann and Alan Turing from over a half century ago. While these math ideas are fundamental, there is no reason to believe they’re the best framework to use when creating a complicated program like a video game or modeling a complex scientific phenomenon.
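
To make the contrast concrete, here is a minimal sketch in TypeScript (a typed cousin of the JavaScript mentioned above, chosen only for illustration). One high-level line and its fussier spelled-out equivalent compute the same thing:

```typescript
// One high-level line: the runtime quietly handles allocation,
// iteration, bounds, and function dispatch.
const doubled = [1, 2, 3].map((n) => n * 2);

// Roughly the same work spelled out as the smaller, fussier
// commands the high-level syntax stands in for.
const input = [1, 2, 3];
const result: number[] = new Array(input.length);
for (let i = 0; i < input.length; i++) {
  result[i] = input[i] * 2;
}

console.log(doubled, result); // [ 2, 4, 6 ] [ 2, 4, 6 ]
```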

This brings us to the other side of the schism. There is an emerging kind of programming that has been practiced by people as diverse as robot builders, experimental user-interface designers, and machine-vision experts. These people had to find ways for a computer to interface with the physical world, and it turns out that doing so demands a very different, more adaptable approach.

Back in the 1950s and 1960s, when computer science was young, it was not clear that the two types of programming would be so distinct. There were occasional quixotic attempts to bridge the gap—for instance, by finding a compact set of rules to define the English language or the way a human brain recognizes an object by sight. Unfortunately, such rules don’t exist (see "Sing a Song of Evolution"). Instead, computer scientists who confronted the outside world had to develop new techniques that perform statistical analysis on large streams of data. These techniques are based on fuzziness instead of perfection. This is the approach that allows a rover to navigate across the surface of Mars without an exact map of the terrain it is crossing. You don’t have to name the shape of every hump of dust on the ground the way you have to name every file on a hard disk.
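
Here is a toy sketch of that statistical style, with invented terrain “signatures” standing in for real rover sensor data. The point is only the contrast: an exact name lookup fails on any deviation, while a nearest-template match tolerates noise by design:

```typescript
// The precise approach: an exact lookup, which fails totally on
// any deviation (one wrong character yields nothing).
const terrainByName = new Map([
  ["flat_plain", 0],
  ["dust_hump", 1],
]);
console.log(terrainByName.get("dust_humq")); // undefined

// The statistical approach: match a noisy reading to the closest
// known template, tolerating imperfection by design.
type Template = { label: string; signature: number[] };

const templates: Template[] = [
  { label: "flat plain", signature: [0.1, 0.1, 0.1] },
  { label: "dust hump", signature: [0.2, 0.9, 0.2] },
  { label: "rock", signature: [0.9, 0.9, 0.9] },
];

// Euclidean distance between two equal-length readings.
function distance(a: number[], b: number[]): number {
  return Math.sqrt(a.reduce((sum, x, i) => sum + (x - b[i]) ** 2, 0));
}

// Return the label of the template nearest to the reading.
function classify(reading: number[]): string {
  let best = templates[0];
  for (const t of templates) {
    if (distance(reading, t.signature) < distance(reading, best.signature)) {
      best = t;
    }
  }
  return best.label;
}

// A noisy reading still lands on the right answer.
console.log(classify([0.25, 0.85, 0.15])); // "dust hump"
```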

The core idea of phenotropics is that it might be possible to apply statistical techniques not just to robot navigation or machine vision but also to computer architecture and general programming. Right now, however, computer interiors are made of a huge number of logical modules that connect together through traditional, explicitly defined protocols, a very precise but rigid approach. The dark side of formal precision is that tiny changes can have random, even catastrophic effects. Flipping just one bit in a program might cause it to crash.

For a dramatic illustration of the limitations of current techniques, browse around on MySpace. You will see lots of pages with weird broken elements, like missing pictures or text and images wildly out of place. What happened is that the protocols connecting the different elements together weren’t perfect. They probably looked right initially, but some little detail changed, and the mistake couldn’t be unwound easily enough for an amateur programmer to fix.

The phenotropic approach would be closer to what happens in biological evolution. If tiny flips in an organism’s genetic code frequently resulted in huge, unpredictable physical changes, evolution would cease. Instead there is an essential smoothness in the way organisms are related to their genes: A small change in DNA yields a small change in a creature—not always, but often enough that gradual evolution is possible. No wonder software engineers have such a hard time. When they do an experiment by changing a piece of code, the results they get are shockingly random. As a result, they generally cannot use code itself as a tool for learning about code.
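
A toy illustration of that difference, with an invented numeric “genome”: nudging a parameter nudges the output, which is the smoothness that gradual search depends on, while a one-character mutation in program-like text yields not a nearby program but a crash:

```typescript
// Smooth mapping: a small change in the "genome" (a parameter)
// gives a small change in the "creature" (the output), so gradual
// search can work.
const phenotype = (gene: number) => Math.sin(gene) + gene / 2;
console.log(phenotype(1.0));  // 1.3414...
console.log(phenotype(1.01)); // 1.3518... (a nearby result)

// Brittle mapping: flip one character in program-like text and the
// result is not a nearby program but a failure.
const program = '{"speed": 10, "heading": 90}';
const mutated = program.replace("}", "]"); // a one-character "mutation"
console.log(JSON.parse(program)); // { speed: 10, heading: 90 }
try {
  JSON.parse(mutated); // throws
} catch (e) {
  console.log("mutant program crashed:", (e as Error).name); // SyntaxError
}
```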

Another place to find a helpful biological metaphor is in the human brain. The brain is made of specialized parts, like the visual cortex, the cerebellum, and the hippocampus. During fetal development, the many parts find each other to make an extraordinary number of connections, neuron reaching out to neuron. Each part forms itself so as to make the best use of the rest of the brain. Each border between regions of the brain can be thought of as a grand version of the sort of connection I’ve described between a computer and the outside world, based on an ever-improving approximation instead of a perilous reliance on perfection.

Suppose software could be made of modules that were responsible for identifying each other with pattern recognition. Then, perhaps, you could build a large software system that wouldn’t be vulnerable to endless unpredictable logic errors.
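
Nobody has built such a system, so what follows is only a hedged sketch, with every module name and “profile” invented for illustration: instead of binding by exact name, a caller binds to whichever module advertises the closest matching profile:

```typescript
// Hypothetical sketch: modules advertise a rough numeric "profile"
// of what they do, and callers bind to the best approximate match
// rather than to an exact name.
type Module = { profile: number[]; run: (x: number) => number };

const modules: Module[] = [
  { profile: [1, 0, 0], run: (x) => x + 1 }, // arithmetic-ish
  { profile: [0, 1, 0], run: (x) => x * x }, // geometry-ish
  { profile: [0, 0, 1], run: (x) => -x },    // inversion-ish
];

// Cosine similarity: 1 means a perfect match, 0 means unrelated.
function cosine(a: number[], b: number[]): number {
  const dot = a.reduce((s, x, i) => s + x * b[i], 0);
  const norm = (v: number[]) => Math.sqrt(v.reduce((s, x) => s + x * x, 0));
  return dot / (norm(a) * norm(b));
}

// Bind by similarity: an imperfect description still finds a
// partner, where an exact-name lookup would simply fail.
function bind(wanted: number[]): Module {
  return modules.reduce((best, m) =>
    cosine(wanted, m.profile) > cosine(wanted, best.profile) ? m : best
  );
}

const partner = bind([0.9, 0.1, 0.05]); // close to, not equal to, [1, 0, 0]
console.log(partner.run(41)); // 42
```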

What would the modules be like? One idea is that each module could be a user interface, like the contents of a window on a Mac or Vista desktop. Here’s a bit of trivia: Andy Hertzfeld, who wrote much of the original Macintosh OS and is a major figure in Dreaming because of his work on Chandler, helped me try to realize an early experiment along these lines. It was called Embrace back then, in 1984, and it came together right after Andy quit Apple (when the Mac was released).

In the Embrace system, user interfaces could operate each other—one window manipulating another as if it were a human user. This might sound like a strange idea, and it was. A hidden digital character—like a figure in a video game that just isn’t animated to appear on-screen—was affixed to the back side of every window. This figure could be trained to operate other windows, each of which had an ability to do the same thing. In this way a whole software environment was made of nothing but user interfaces that could operate each other! Instead of a library of arithmetic routines, for instance, there was a graphical calculator gadget, and either a digital character or an actual human could use it to add numbers through the same interface.
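
As a rough latter-day sketch of the Embrace idea (every name below is invented, not recovered from the 1984 system): a calculator gadget exposes nothing but the moves a user could make, namely pressing keys, and a scripted “character” adds numbers through exactly that interface:

```typescript
// A calculator gadget whose only interface is its keys, just as a
// human at the screen would experience it.
class CalculatorGadget {
  private display = 0;
  private pending = 0;
  press(key: string): void {
    if (key === "+") {
      this.pending = this.display;
      this.display = 0;
    } else if (key === "=") {
      this.display += this.pending;
      this.pending = 0;
    } else {
      this.display = this.display * 10 + Number(key); // a digit key
    }
  }
  read(): number {
    return this.display; // what anyone sees on screen
  }
}

// The "digital character": a trained sequence of key presses,
// operating the gadget exactly as a human user would.
function characterAdds(gadget: CalculatorGadget, a: number, b: number): number {
  for (const key of `${a}+${b}=`) gadget.press(key);
  return gadget.read();
}

console.log(characterAdds(new CalculatorGadget(), 12, 30)); // 42
```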

Oddly enough, the Internet might be evolving to look a little like this old experiment. Existing Web pages are already being used by intrepid programmers as raw materials for new Web pages, called mashups. In my opinion, the biggest missed opportunity in the current wave of Web development is that mashups are glued together using the traditional sort of abstract programs. Now that machine vision and other pattern-based techniques are becoming reliable, it is finally conceivable for Web pages to use each other at the user-interface level, the same way that humans use them. That would be a great development: Then a mashup construction could become adaptive and escape the curse of random logic mistakes.

Perhaps something like the phenotropic idea will be realized out of the Web as it evolves, or perhaps it needs to be built in a lab first. Whether the old Embrace idea of “user interface as building block” is any good or not, it seems likely that pattern recognition will come to play a greater role in digital architecture.

I’ll leave you with a thought that haunts me. In my earlier days, when I experienced “dreaming in code,” I had a peculiar and profound experience from time to time. I would get a gut feeling that a program had suddenly achieved the state of being bug free. When I got that feeling, I was always right—as far as anyone could tell, at least. (As Turing proved, there is no general way to predict the outcome of running a given program; see "The Soul of the Machine".) The sensation was like the one I get when I understand a mathematical proof. It’s a sharp-edged sense of rightness that I wish everyone could feel. Of course, there was no way to know how reliable this sense really was. I could have easily had a lucky streak or fooled myself by forgetting the times the feeling was wrong.

This reminiscence can’t help but evoke Roger Penrose’s notion that the experience of “getting” a mathematical proof involves extraordinary neurological mechanisms—according to him, a quantum computation going on within the brain. I don’t accept that argument. What I gather instead is that approximate pattern recognition—the process that was taking place in my head during my programming epiphanies—can become very reliable at understanding a complex system. And that is one big reason why I am still chasing the dream of phenotropics.
