The title of this month's column comes from Edward Bellamy's 1888 utopian novel, Looking Backward, 2000–1887. Bellamy's conceit was that of a Bostonian hypnotized in 1887 who awakens in 2000 to an America transformed by the genius of centralized government planning into a paradise of rich, contented, and homogenized citizens. In his predictions, Bellamy correctly identified the year 2000 as belonging to the 20th century (something most actual residents of the year 2000 seemed to be mistaken about), but got just about everything else wrong. Besides his explicit racism and sexism and his implicit (although comfortable) enslavement of the citizenry in the national army of production, Bellamy described a world optimized for stasis: manners remained at a 19th-century ideal (although women were now forward enough to mention that they liked someone before a marriage proposal), Liszt or Chopin would have been comfortable with the music, and society was somehow vastly richer through an unspecified set of inventions.
However, like the novelist who can't create a character who is a better writer than himself, Bellamy's vision lacked not only any interesting inventions but even a description of the mechanisms that would encourage them. His most advanced technical creation seemed to be an early, four-station version of cable radio, supplied by live orchestras. Not only did Bellamy fail to predict streaming audio over the Internet, he also missed the proximate radio (1895) and the prior phonograph (1877). Like Webvan customers, Bellamy's citizen-worker-soldiers had free grocery delivery, but they had to place their orders at district stores, where the paperwork was pneumatically transferred to the central distribution facility, eschewing the existing telephone (1876) or even the ancient fax (1842).
Bellamy's novel came to mind last week because I was reminded of one of the (many) follies of my middle age, The Arachnoid Tourist. Those of you who have been reading IC since the last century might remember the Tourist, whose (initial, at least) goal was to visit and review Web sites.
In 2006, this idea seems ludicrous enough, but in a conference room in the IEEE Computer Society publications office in 1996, a bunch of academics and researchers flailing about for ways to fill a publication on "Internet computing" thought it worthwhile enough to try. Although Bellamy's world didn't change much in the 113 years before 2000, the Web of 1996 was an exotic thing. Most corporations, for example, were only beginning to grasp the need for a Web presence and were groping their way toward figuring out what to make of this Internet stuff.
Strangely enough, the mere activity of pointing to interesting Web sites was itself a perilous act of prediction.
Overcoming my trepidations, I recently revisited that first year's (1997) columns. Feniosky Peña-Mora and I crawled to 37 sites that year, a collection of academic, community, and commercial demos, information hubs, essays, and running commercial applications. I would characterize only a few (10 of the 37) as still intact nine years later. Most likely to survive are the essayists, who (like this author) are most scrupulous about preserving their words for posterity. At the opposite end of the spectrum, most likely to be expired or moribund, are academic research sites, their funding grants expired and their graduate students scattered.
Most interesting are the fates of the commercial ventures. Sites from large computer companies (Sun, IBM) are on course, although Microsoft's Java support seems to have disappeared. A pair of early commercial "search engines" (that is, online paper catalogs), faulted for being too much like print editions, have morphed into searchable online directories. Like the Yellow Pages, they've accumulated paying advertisers rather than offering a general search of everything available. Advertisers can claim specific attributes (for example, ISO 9002 compliance) from a small set, and you can search with respect to these attributes. But, like the physical directory whose local billboard has been proclaiming that it's the most up-to-date printed directory available (with nary a mention of the temporal properties of online directories), I can't tell whether these sites are actually useful to anyone.
Many performance concerns of 1997 (mapping Web traffic, caching, and network monitoring) have receded into the woodwork — these technologies work so well that we hardly think of them any longer. The Internet of 1997 was unsure if the expanding traffic load would lead to collapse. The Internet of 2006 debates whether differential pricing will hasten or impede the seemingly certain-to-appear living room virtual reality.
Collaborative information sharing has blossomed, with sites featuring consumer reviews, ratings, virtual communities, and blogs. As a measure of the structure of the collaborative times, my favorite 1997 site — contributed maps of subway systems — has been replaced by an expectation that any self-respecting subway must have its own Web site. However, it's an interesting economic metric that although many sites are happy to provide driving directions, integrated public transportation routing (including bus lines) remains rare. This reflects two realities: people who ride the bus lack the economic allure for advertisers of those in their cars, and routing through a network of frequently stopping, imperfectly periodic bus lines and station-centric trains, mixed with the ability to walk (a little), is a far more difficult combinatorial problem than driving.
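To see why transit routing is combinatorially harder, consider a minimal sketch (in Python, with an entirely hypothetical toy timetable). Road routing can treat each edge's cost as fixed, but a bus edge is usable only if you reach the stop before it departs, so the search must range over (stop, clock-time) states rather than stops alone:

```python
import heapq

# Hypothetical toy timetable: (from_stop, to_stop, departure, arrival),
# times in minutes. An edge exists only at its scheduled departure time.
TIMETABLE = [
    ("A", "B", 10, 20),
    ("A", "B", 40, 50),
    ("B", "C", 25, 40),
    ("B", "C", 55, 70),
]
# Walking links: (stop, stop, duration) -- usable at any time.
WALKS = [("A", "C", 90)]

def earliest_arrival(start, goal, depart_at):
    """Dijkstra over (stop, clock-time) states: time-dependent search."""
    best = {start: depart_at}          # earliest known arrival per stop
    heap = [(depart_at, start)]
    while heap:
        now, stop = heapq.heappop(heap)
        if now > best.get(stop, float("inf")):
            continue                   # stale queue entry
        if stop == goal:
            return now
        # Board any service we can still catch (waiting is implicit).
        for u, v, dep, arr in TIMETABLE:
            if u == stop and dep >= now and arr < best.get(v, float("inf")):
                best[v] = arr
                heapq.heappush(heap, (arr, v))
        # Walking is available whenever we like.
        for u, v, dur in WALKS:
            if u == stop and now + dur < best.get(v, float("inf")):
                best[v] = now + dur
                heapq.heappush(heap, (now + dur, v))
    return None
```

Note that the answer depends on the departure time: leaving a few minutes later can mean missing a connection and arriving much later, a nonlinearity that simple road routing never faces.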
Most interesting are the precursors of modern business models. The online map creation program still works, although it pales in comparison with the widely available online satellite imagery. The Tourist correctly recognized the importance of online auctions and information push technologies, but the 1997 pioneers in these technologies lie fallen, succeeded by the likes of eBay and RSS feeds.
By 1998, choosing "interesting" Web sites had become too amorphous an activity — there were too many of them, with too much variety in content and aims. The Tourist slowly morphed into a tutorial/survey model, using Web sites as pointers for learning about something in particular. It was a far more sustainable model, though even that model wasn't sustainable into the 21st century. Those 1997 columns are an interesting historical footnote — not as embarrassing as I had feared nor nearly as inaccurate as Bellamy, but hardly precise enough for genuine technology bets. It's still true, as Niels Bohr observed, that prediction is very difficult, especially about the future.