Some technological revolutions arrive as revelation. You hear a human voice wafting out from a rotating plastic disk or see a moving train projected onto a screen, and you sense instantly that the world has changed. For many of us, our first encounter with the World Wide Web a decade ago was one of those transformative experiences: You clicked on a word on the screen, and instantly you were transported to some other page that was served up from a computer located somewhere else, across the planet perhaps. After you followed that first hyperlink, you knew the universe of information would never be the same.
Other revolutions creep up with more subtlety, built of tweaks and minor advances, not radical breakthroughs. E-mail took decades to gestate, but now many of us can’t imagine life without it. There’s a comparable quiet revolution under way right now, one that is likely to fundamentally transform the way we use the Web in the coming years. The changes are technical and involve thousands of individual programmers, dozens of start-ups, and a few of the largest software companies in the world. The result is the equivalent of a massive software upgrade for the entire Web, what some commentators have taken to calling Web 2.0. Essentially, the Web is shifting from an international library of interlinked pages to an information ecosystem, where data circulate like nutrients in a rain forest.
Part of the beauty and power of the original Web lay in its simplicity: Web sites were made up of pages, each of which could contain text and images. Those pages were able to connect to other information on the Web through links. If you were maintaining a Web site about poodles and stumbled across a promising breeder’s home page, you could link to the information on that page by inserting a few simple lines of code. From that point on, your site was connected to that other page, and subsequent visitors to your site could follow that connection with a single mouse click. In some basic sense, those two pages of data were interacting with each other, but the exchange between them was rudimentary.
Now consider how a group of poodle experts might use Web 2.0. One of them subscribes to a virtual clipping service offered by Google News; she instructs the service to scan thousands of news outlets for any articles that mention the word poodle and to send her an e-mail alert when one of them comes down the wire. One morning, she finds a link to a review of a new book about miniature poodles in her in-box. She follows the link to the original article, and using a standard blogging tool like TypePad or Blogger, she posts a quick summary of the review and links to the Amazon page for the book from her blog.
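Stripped of the plumbing, a clipping service like this is a simple pattern: scan each incoming article for a keyword and queue an alert for every match. A minimal sketch in Python, with invented sample articles standing in for a real news feed (this is not Google's actual service, just the idea behind it):

```python
# Minimal sketch of a keyword "clipping service": scan incoming
# articles for a keyword and collect an alert for each match.
# The articles below are hypothetical sample data, not a real feed.

def scan_for_alerts(articles, keyword):
    """Return one alert message per article that mentions the keyword."""
    alerts = []
    for article in articles:
        text = (article["title"] + " " + article["body"]).lower()
        if keyword.lower() in text:
            alerts.append(f"New match: {article['title']} <{article['url']}>")
    return alerts

articles = [
    {"title": "Miniature Poodles: A New History",
     "body": "A review of the season's surprise best-seller.",
     "url": "http://example.com/review"},
    {"title": "Stock Markets Rally",
     "body": "Indexes closed higher on Tuesday.",
     "url": "http://example.com/markets"},
]

alerts = scan_for_alerts(articles, "poodle")
```

In a real service the scan would run continuously against incoming feeds and the alerts would go out by e-mail, but the core logic is no more than this keyword filter.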
Within a few hours of her publishing the note about the new book, a service called Technorati scans her Web site and notices that she has added a link to a book listed on Amazon. You can think of Technorati as the Google of the blog world, constantly analyzing the latest blog posts for interesting new developments. One of the features it offers is a frequently updated list of the most talked-about books in the blog world. If Technorati stumbles across another handful of links to that same poodle book within a few hours, the poodle book itself might show up on the hot books list.
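Under the hood, a "most talked-about books" list is essentially link counting: crawl recent blog posts, tally their outbound links to book pages, and surface any title that crosses a threshold. A rough sketch of that tally, with invented posts in place of a real crawl (the URL pattern and threshold are assumptions for illustration):

```python
from collections import Counter
from urllib.parse import urlparse

# Rough sketch of a Technorati-style "hot books" list: tally recent
# blog posts' outbound links to a bookseller's product pages and
# surface anything linked at least min_links times.
# The posts below are invented sample data, not a real crawl.

def hot_books(posts, min_links=3):
    counts = Counter()
    for post in posts:
        for link in post["links"]:
            # Keep only links that point at a book product page.
            if urlparse(link).netloc == "www.amazon.com" and "/dp/" in link:
                counts[link] += 1
    return [url for url, n in counts.most_common() if n >= min_links]

posts = (
    [{"links": ["http://www.amazon.com/dp/POODLE123"]}] * 4 +
    [{"links": ["http://www.amazon.com/dp/OTHERBOOK"]}] * 2
)

top = hot_books(posts)
```

Four separate bloggers linking to the same poodle book is enough to push it onto the list; two links to another title is not. That thresholded tally is the whole trick behind a buzz chart.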
After our poodle expert posts her blog entry, she takes another few seconds to categorize it, using an ingenious service called del.icio.us, tagging it with content-specific labels such as “miniature poodles” or “dog breeding.” She does this for her own personal use—del.icio.us lets her see at a glance all the pages she has classified with a specific tag—but the service also has a broader social function; tags are visible to other users as well. Our poodle expert can also see all the pages that other users have associated with dog breeding. It’s a little like creating a manila folder for a particular topic, and every time you pick it up, you find new articles supplied by strangers all across the Web.
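The data structure behind such a tagging service is remarkably simple: a map from each tag to the set of (user, page) pairs filed under it. One index answers both the personal question ("my pages for this tag") and the social one ("everyone's pages for this tag"). A sketch, with made-up bookmarks (this is an illustration of the idea, not del.icio.us's actual implementation):

```python
from collections import defaultdict

# Sketch of a del.icio.us-style tag index: each tag maps to the set
# of (user, url) bookmarks filed under it. The sample data is invented.

class TagIndex:
    def __init__(self):
        self.by_tag = defaultdict(set)

    def bookmark(self, user, url, tags):
        """File one page under one or more tags."""
        for tag in tags:
            self.by_tag[tag].add((user, url))

    def my_pages(self, user, tag):
        """The personal view: pages this user filed under the tag."""
        return {url for u, url in self.by_tag[tag] if u == user}

    def all_pages(self, tag):
        """The social view: pages anyone filed under the tag."""
        return {url for _, url in self.by_tag[tag]}

index = TagIndex()
index.bookmark("poodle_expert", "http://example.com/review", ["dog breeding"])
index.bookmark("stranger", "http://example.com/kennel", ["dog breeding"])
```

The social function falls out of the data structure for free: the moment a stranger files a page under "dog breeding," it appears in everyone else's view of that folder.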
Del.icio.us’s creators call the program a social bookmarking service, and one of its key functions is to connect people as readily as it connects data. When our poodle lover checks in on the dog-breeding tag, she notices that another del.icio.us user has been adding interesting links to the category over the past few months. She drops him an e-mail and invites him to join a small community of poodle lovers using Yahoo’s My Web service. From that point on, anytime she discovers a new poodle-related page, he’ll immediately receive a notification about it, along with the rest of her poodle community, either via e-mail or instant message.
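That community-notification step is a small publish/subscribe pattern: when one member shares a page, a message goes out to every other subscriber. A toy version, with delivery reduced to an in-memory inbox standing in for real e-mail or instant messages (the class and names are invented, not Yahoo's My Web API):

```python
# Toy publish/subscribe sketch of a shared-links community: when one
# member shares a page, every other member receives a notification.
# In-memory inboxes stand in for e-mail or instant-message delivery.

class Community:
    def __init__(self, topic):
        self.topic = topic
        self.inboxes = {}  # member name -> list of notifications

    def join(self, member):
        self.inboxes[member] = []

    def share(self, sender, url):
        note = f"{sender} shared a {self.topic} page: {url}"
        for member, inbox in self.inboxes.items():
            if member != sender:  # don't notify the sender
                inbox.append(note)

poodles = Community("poodle")
poodles.join("alice")
poodles.join("bob")
poodles.share("alice", "http://example.com/review")
```

The pattern decouples discovery from delivery: the person who finds the page never has to know who is listening, or how each listener prefers to be reached.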
Now stop and think about how different this chain of events is from the traditional Web mode of following simple links between static pages. One small piece of new information—a review of a book about poodles—flows through an entire system of reuse and appropriation within hours. The initial information value of the review remains: It’s an assessment of a new book, no different from the reviews that appear in traditional publications. But as it ventures through the food chain of the new Web, it takes on new forms of value: One service uses it to help evaluate the books with the most buzz; another uses it to build a classification schema for the entire Web; another uses it as a way of forming new communities of like-minded people. Some of this information exchange happens on traditional Web pages, but it also leaks out into other applications: e-mail clients, instant-messenger programs.
The difference between this Web 2.0 model and the previous one is like the difference between a rain forest and a desert. One of the primary reasons we value tropical rain forests is because they waste so little of the energy supplied by the sun while running massive nutrient cycles. Most of the solar energy that saturates desert environments dissipates unused; only a fraction is assimilated by the few plants that can survive in such a hostile climate. Those plants pass on enough energy to sustain a limited number of insects, which in turn supply food for the occasional reptile or bird, all of which ultimately feed the bacteria. But most of the energy is lost.
A rain forest, on the other hand, is such an efficient system for using energy because there are so many organisms exploiting every tiny niche of the nutrient cycle. We value the diversity of the ecosystem not just as a quaint case of biological multiculturalism but because the system itself does a brilliant job of capturing the energy that flows through it. That efficiency is one of the reasons clearing rain forests is shortsighted: The nutrient cycles in rain forest ecosystems are so tight that the soil is usually very poor for farming. Nearly all the available energy has been captured before it ever reaches the ground.
Think of information as the energy of the Web’s ecosystem. Those Web 1.0 pages with their crude hyperlinks are like the sun’s rays falling on a desert. A few stragglers are lucky enough to stumble across them, and thus some of that information might get reused if one then decides to e-mail the URL to a friend or to quote from it on another page. But most of the information goes to waste. In the Web 2.0 model, we have thousands of services scrutinizing each new piece of information online, grabbing interesting bits, remixing them in new ways, and passing them along to other services. Each new addition to the mix can be exploited in countless new ways, both by human bloggers and by the software programs that track changes in the overall state of the Web. Information in this new model is analyzed, repackaged, digested, and passed on down to the next link in the chain. It flows.
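This "food chain" is, in software terms, a pipeline: a piece of information passes through a series of independent services, each of which adds a layer of value before handing it on. A toy sketch of that architecture, with three invented stand-in services (a summarizer, a classifier, a notifier) in place of the real ones:

```python
# Toy sketch of the Web 2.0 "information food chain": a piece of data
# flows through a pipeline of independent services, each of which
# enriches it before passing it along. All three services are
# invented stand-ins for illustration.

def annotate(item):
    item["summary"] = item["text"][:40]      # a clipping service's excerpt
    return item

def classify(item):
    item["tags"] = ["poodle"] if "poodle" in item["text"].lower() else []
    return item

def notify(item):
    item["notified"] = bool(item["tags"])    # alert a community if tagged
    return item

def pipeline(item, services):
    for service in services:
        item = service(item)
    return item

review = {"text": "A new book about miniature poodles arrives."}
result = pipeline(review, [annotate, classify, notify])
```

The crucial design property is that each service knows nothing about the others; new links can be spliced into the chain without rewiring anything upstream, which is exactly why the Web 2.0 ecosystem can grow so fast.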
This is good news whether we love poodles or not, but it’s also good news economically because the diversity of the ecosystem makes it a fertile environment for small players. You don’t have to dominate the food chain to get by in the Web world; you can find a productive niche and thrive, partially because you’re building on the information value created by the rest of the Web. Technorati and del.icio.us both began as small projects created by single programmers. They don’t need huge staffs because they’re capturing the information supplied by the countless surfers who use their services, and they’re building on tools created by other people, whether those people work in a home office or in a vast international corporation like Google. All of which makes this the most exciting time to be on the Web since the glory days of the mid-1990s. And the revelations aren’t about to stop. As we figure out new ways to expand the complex information food chains of Web 2.0, we will see even more innovation in the coming years. Welcome to the jungle.