IEEE Internet Computing

George Gilder: On the Bandwidth of Plenty

"For the next 30 years, bandwidth is going to be the fastest growing resource, and we will use it like we have used the transistor for the past 30 years." George Gilder

Internet Computing's editor in chief Charles Petrie caught up with George Gilder at his office in the Berkshire Mountains of Massachusetts on December 9, 1996, to talk about technology trends in the industry. Gilder is well known for his wide-ranging analysis of high technology. In his best-selling work, Microcosm (1989), he explored the quantum roots of the new electronic technologies. His forthcoming book, Telecosm, serialized over the past three years in Forbes ASAP (where Gilder was a founding editor), is an analysis of the opportunities offered by Internet technology to create wealth and enhance culture.

Gilder's current focus is the launch in January 1997 of the Gilder Technology Report in a joint venture with Forbes. Here Gilder's aim is to track key technologies that are opening the Net to more and more people. To Gilder these are computer architecture--bandwidth will fundamentally change computers--and the myriad factors that will open up bandwidth. In the near term, these include increasing backbone capacity, cable modems, the telephone companies' digital subscriber line (DSL) technologies, satellites, and advances in digital wireless. Eventually, Gilder believes, all-optical fiber networks will radically expand transmission capacity.

Gilder spends most of his time these days talking with engineers, whom he views with admiration as prime creators of wealth in the post-industrial Information Age. The fruits of these conversations will appear in the Gilder Technology Report and its accompanying Web site. Current plans include an interactive lab at the site so that researchers can take advantage of Gilder's data.

Petrie: Today I'd like to focus on your claim that bandwidth will be available in abundance, that it will be the transistor of the early 21st century. Let's begin with wireless technologies, which I know you think will offer large gains. You've been following the war between CDMA and TDMA for dominance of the digital cellular and PCS markets (see sidebar, PCS and the Wireless Market). Tell me what's happening in this arena.

Gilder: CDMA was launched in 1989 by Qualcomm, led by Andrew Viterbi and Irwin Jacobs. It represents one of the many technologies (Java is another) that baffle the backers of their rivals by prevailing against all odds because they fit the dynamics of the Internet. CDMA is a direct-sequence spread-spectrum solution that attracted me because of its elegance for data bandwidth on demand and its use of information theory: the concept of broadband noise as the highest density source of information. I've been pushing it since 1989. It accords with Claude Shannon's thesis that digital bandwidth can serve as a replacement for both power and switching. This trade-off will become more and more attractive as battery-powered mobile computers move up spectrum where bandwidth is plentiful.

But it's a war out there! With 25 million phones globally, GSM is the only successful industrial policy of the European Economic Community, period. The EEC contrived GSM in response to the proliferation of analog standards in Europe, which prevented roaming. There was a different analog standard in each country. By contrast, we had AMPS (Advanced Mobile Phone Service) and a coherent analog mobile phone system. So the EEC mandated GSM, a very conservative standard with 200-kHz channels that achieved only a threefold advantage over analog. Nonetheless, it allowed Europe to jump ahead to digital before we did. When Qualcomm introduced CDMA, it precipitated one of the most dramatic standards battles ever.

Petrie: Why is it so intense?

Gilder: Even though it would seem today that, with 25 million phones, GSM pretty much prevails, in fact, the potential for wireless local loops, wireless Internet access, and all the other applications of PCS is so immense that 25 million represents only the beginning of the game. And although GSM certainly is a viable technology, it looks like it will not prevail as the dominant global standard. Suddenly CDMA is taking off like a rocket.

For some reason people opposed CDMA with unusual intensity. Bruce Lusignan, a brilliant professor of electrical engineering at Stanford, said that CDMA, as Qualcomm described it, violates the laws of physics--and this was quoted over and over again. So the laws of physics--the laws of God, if you will--were invoked in this debate! And because it was said to violate the laws of physics, lots of people jumped to the conclusion that Irwin Jacobs and Andrew Viterbi (of Viterbi algorithm fame) were pushing a technology scam!

Petrie: So this was the cold fusion of telephony.

Gilder: Yes, to old analog hands CDMA seemed too good to be true. It exploits the special advantages of digital, which unlike analog improves by the square of the bandwidth and requires signal-to-noise ratios 40 decibels lower. The same codes that spread out the signal are inverted and used to despread it at the receiver. The wanted signal pops out above the background noise level, while the real noise spikes and ingress are spread and sink below it. It's magic if you don't get it. I spent a fascinating day with Lusignan, and while he started by trying to persuade me that theoretically no gains are realized by moving from frequencies or time slots to codes, he ended up by arguing quite earnestly that there was no reason to go to digital. He saw analog as elegant, efficient, convenient, robust, and just great, and as incorporating a whole array of his patents.
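The spread-and-despread trick is easy to see in miniature. Here is a minimal sketch in Python with NumPy, using hypothetical parameters (8 data bits, a 64-chip pseudonoise code, noise twice the per-chip signal amplitude); it illustrates the direct-sequence idea, not Qualcomm's actual implementation:

    import numpy as np

    rng = np.random.default_rng(42)
    N_BITS, CHIPS_PER_BIT = 8, 64                        # hypothetical parameters

    bits = rng.integers(0, 2, N_BITS) * 2 - 1            # data as +/-1 symbols
    code = rng.integers(0, 2, CHIPS_PER_BIT) * 2 - 1     # pseudonoise spreading code

    # Spread: each bit is multiplied by the whole chip sequence, turning a
    # narrowband symbol into a wideband, noise-like signal.
    tx = np.repeat(bits, CHIPS_PER_BIT) * np.tile(code, N_BITS)

    # Channel: additive noise at twice the per-chip signal amplitude.
    rx = tx + 2.0 * rng.standard_normal(tx.size)

    # Despread: multiply by the same code again (code * code = 1) and
    # integrate over each bit period; the signal pops out of the noise.
    despread = rx * np.tile(code, N_BITS)
    decisions = np.sign(despread.reshape(N_BITS, CHIPS_PER_BIT).sum(axis=1))

    print("sent:   ", bits)
    print("decoded:", decisions.astype(int))   # should match despite the noise

The 64-chip spreading factor is the processing gain: after despreading, each bit sums coherently to plus or minus 64 while the noise grows only as the square root of the chip count, so the decision is reliable even though each individual chip is buried in noise.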

When I discovered that the most sophisticated opponent of CDMA was really opposing the whole digital revolution, it seemed to me that the case was collapsing. I then went to Thomas Cover, the leading information theorist at Stanford. Cover likes CDMA . . . but confirmed that in theory, a time-division system would have just as much bandwidth as a code-division system.

Petrie: So does CDMA give real gains?

Gilder: Yes, it works in practice but not in theory. Lusignan of course is right that in Shannon's terms it does not matter how you slice up the bandwidth; the limit will remain the same. But CDMA's advantages derive from the efficiencies of digital, the exponential advance of microchips, and the decline of time-division multiplexing for all data applications. Whether in wires or the air, TDM is failing for data because it does not correspond to the bursty flows of bits--some time slots are empty and others are flooded. . . . If TDM didn't work efficiently for data in wires, how was it going to work in wireless?
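Shannon's indifference to how the band is sliced can be stated compactly; this is the standard capacity formula, not anything specific to Gilder or Lusignan. For white noise of power spectral density N_0 and total signal power S in a band B,

$$C = B \log_2\!\left(1 + \frac{S}{N_0 B}\right),$$

and dividing the band among k users, each with bandwidth B/k and power S/k, leaves the aggregate unchanged:

$$\sum_{i=1}^{k} \frac{B}{k}\,\log_2\!\left(1 + \frac{S/k}{N_0 B/k}\right) = B \log_2\!\left(1 + \frac{S}{N_0 B}\right) = C.$$

The same holds for time slots or codes. The practical gains Gilder cites come from the statistics of bursty traffic and from power control, not from beating this limit.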

When I saw that, I knew CDMA would prevail, because obviously Internet data would be an absolutely essential application of any new-generation wireless technology. The key advantage of CDMA is that it uses all the spectrum all the time, so it can accommodate bursts and bandwidth on demand. Also, the people who said it wouldn't work said it was too complex. But in digital semiconductors, the complexity sinks into the chip and becomes simple. And so the fact that it was too complex in 1989 or 1990 was not relevant to 1995 or 1996, when you could put the whole thing on a single ASIC, as Qualcomm is now doing.

Petrie: So is CDMA working?

Gilder: It had its problems in the beginning. Managing all the codes and power levels is very complex with CDMA. All signals have to be received at about the same power or the system doesn't work. Power was going to be a critical issue anyway, because with any wireless application, battery issues are central. The CDMA people had to solve the power control issue, and they did. Lo and behold, it turns out that power is a lot simpler to control than time slots and frequency channels. As a result, CDMA uses between one-tenth and one-thousandth the average transmit power of ordinary AMPS or GSM. This is radically more efficient, and it's another huge win for CDMA.
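A toy model shows why per-user power turned out to be tractable: the base station compares each mobile's received power with a target and commands a one-decibel step up or down, over and over (IS-95 issues such commands many times per second; the numbers below are invented for illustration):

    import numpy as np

    rng = np.random.default_rng(0)
    TARGET_DB, STEP_DB, ROUNDS = 0.0, 1.0, 200   # invented values

    tx_db = np.array([20.0, -10.0])         # near and far mobiles start unequal
    path_loss_db = np.array([60.0, 90.0])   # far mobile sees 30 dB more loss

    for _ in range(ROUNDS):
        fading = rng.normal(0.0, 0.5, 2)            # slow channel variation
        rx_db = tx_db - path_loss_db + fading
        # One up/down command per mobile per round, as in CDMA
        # closed-loop power control on the reverse link.
        tx_db += np.where(rx_db < TARGET_DB, STEP_DB, -STEP_DB)

    print(tx_db - path_loss_db)   # both received powers hover near the target

Both mobiles converge to roughly the same received power at the base station, which is exactly the condition CDMA needs to keep one user's signal from drowning another's.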

Petrie: Forget new battery technology; we just need a new technology that doesn't require so much power.

Gilder: That's right; that is the solution. This is a point I've made when I talk about bandwidth as a replacement for power and as a replacement for excessive complexity of switching. You can use bandwidth as a substitute for the other metrics. For the next 30 years bandwidth is going to be the fastest growing resource, and we will use it like we have used the transistor for the past 30 years.

Well, what finally happened is that the Koreans, of all people, proved that CDMA works robustly and well. After a lot of faltering experimental deployments, it is now unquestionably working. At this point 700,000 CDMA phones have been sold in Seoul, and 550,000 subscribers are being served by one 1.25-MHz carrier. It just suddenly has taken off! In Hong Kong, Hutchison reports that CDMA gives better performance than GSM and uses half as many base stations. In Japan, DDI (the MCI of Japan) is adopting it. Wireless local loop applications are also taking off, and of course they are vital in the Third World.

In a really big upset, the CDMA PCS system of PrimeCo, a large group of phone companies led by AirTouch Cellular, took off before the TDMA PCS system of McCaw (now AT&T Wireless Division)--and McCaw had years and years of advantage. PrimeCo did its big multi-city launch a couple of weeks ago. And Sprint PCS Spectrum has also begun to launch a CDMA PCS system nationwide. NextWave, which wants to wholesale capacity, has adopted CDMA and raised $500 million to deploy it in a year or so. So it's just taken off all over the place, and has more momentum now than GSM. I think it's over the hump. And it's exciting when you get a winner like this--a winner for mobile Internet bandwidth.

Petrie: Let's turn to some topics you've covered in recent Forbes articles. And of course one of the hot topics right now, given it's the end of the year, is your argument with Metcalfe, who said he's going to eat his column if there's not a breakdown of the Internet many times greater than 50,000 lines going down for more than an hour.

Gilder: We discussed that and I recommended he use wasabi to flavor it. It's pretty good on anything inedible.

Petrie: I see! As I understand it Metcalfe had three main arguments: first, that current data showed collapse was inevitable; second, that intranets were sucking up resources while adding a load to the Net; and third, that the Net needed to be managed so that investors could realize returns. You never said much about his third argument.

Gilder: That's because I'm pretty much in agreement with it, although unlike Metcalfe I don't think the Net has to be centrally engineered. Each portion of it is elaborately engineered. That's what Cisco Systems is doing. That's what all these people are doing in the network equipment industry. They're engineering various portions of the Net. I suppose what Metcalfe is implying is that for this kind of equipment the process of somewhat diffused engineering governed by the market is distorted by the pricing anomalies that prevail throughout the phone system. And I agree that these anomalies are a real problem. That's what the Telecommunications Act of 1996 addressed and tried to alleviate, but failed to accomplish because of foolish errors made by the FCC in interpreting it.

As I understand it, under a concept called TELRIC (total element long-run incremental cost), the regional Bell operating companies (RBOCs) have to charge incremental costs to their competitors for upgrades of the network. In other words, if the RBOCs install a lot more fiber and huge new switches and extend the bandwidth of their networks, they will have to then lease the additional capacity to their competitors at cost. Of course this is preposterous. It means no RBOC will ever upgrade its network again, and to the extent that this kind of insanity prevails, the Internet will have to move off the local phone networks--and that will be a real problem.

There are solutions to it, though. There are ways of circumventing the central office switch and running right through the central office without connecting to the 5ESS or DMS-500 switch that runs it, thus avoiding some of the requirements that the new law imposes. But it's still a very unappetizing picture for the RBOCs, and I think this effort to create a level playing field is quixotic and stupid. In law schools they can talk about level playing fields, but out there in engineering there are no level playing fields. The RBOCs have some advantages, the long-distance companies have other advantages, so let them compete. This idea that the 65 million tons of copper wire commanded by the RBOCs is an insurmountable barrier to entry for other companies is nonsense. It's actually a barrier to entry for the RBOCs, a copper cage that keeps them out of the huge new markets for broadband Internet. There are plenty of ways to bypass them with wireless. So if we have to bypass them with wireless, and cable, and satellite, we will, and that is part of my answer to Bob Metcalfe.

Petrie: So you believe that regulations are inhibiting the law of the telecosm.

Gilder: Oh yes. What Roger McNamee, a principal at Integral Capital Partners, calls Moron's Law can always scotch Moore's Law and Metcalfe's Law for a while. You might expect the RBOCs to be poised to exploit this immense market opening before them at the very moment of deregulation. Instead, they are in Washington litigating to prevent deregulation, because they're more fearful of losing their voice monopoly in the local area than they are hopeful of exploiting these huge new Internet markets.

Petrie: The long-distance telecoms want to put out more bandwidth, right?

Gilder: Yes, the interexchange carriers have an incentive to have the best possible network for the corporate customers that they serve around the world, and increasingly this means networks that gain their ultimate value from being linked to the Internet. So, yes, the long-distance carriers have an adequate incentive to deploy bandwidth. MCI has done far more than double its backbone bandwidth this year: it moved from 45 megabits per second to 622 megabits per second, and 622 Mbps is OC-12.

Petrie: Is this regulatory situation good for the cable companies?

Gilder: I think so. I'm an advocate of cable. It's been one area where I've apparently been wrong, so far. The stock market says I am wrong. But I think I will be saved again by CDMA.

Petrie: Are the cable companies using CDMA over coax?

Gilder: Not now, but they will soon. CDMA is great for a noisy channel, and cable upstream is the noisiest channel there is. @Home (which offers high-bandwidth Internet access through coaxial cable lines to the home) is TCI's effort to transcend this problem.

Petrie: Aren't they supplying the big-pipe, little-pipe model?

Gilder: They were. But in order to have a good data solution for home offices and small businesses, they now seem to have gone with the LANCity technology. It was originally a Digital Equipment Corporation project with LANCity to do Ethernet over cable through a metropolitan area. And the LANCity solution is a two-way, symmetrical, 10-megabit-per-second system.

Petrie: What is the secret to making the two-way work?

Gilder: I believe that ultimately all the cable companies will turn to CDMA for two-way technology. A Silicon Valley company called Terayon, backed by Cisco Systems, uses a CDMA variant to get 60 megabits a second of bandwidth out of the bottom 40 megahertz of a cable line, which are so noisy that today they are hardly used at all. Terayon will save the cable industry as much as $20 billion by allowing them to use current plant, without upgrades, for broadband Internet access. As usual, everyone is now in the CDMA denial stage--it violates the laws of physics, and so on--but sooner or later everybody is going to go to CDMA to harvest the lower 40.
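The implied spectral efficiency is modest (my arithmetic, not Gilder's), which is part of what makes the claim credible for so noisy a band:

$$\frac{60 \times 10^{6}\ \text{b/s}}{40 \times 10^{6}\ \text{Hz}} = 1.5\ \text{b/s/Hz}.$$

A spread-spectrum scheme can afford to spend bandwidth this freely to buy immunity from the ingress in the lower 40 MHz.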

The other available area for digital is above either 500 MHz or 750 MHz, depending on the quality of the system. Here you can devote some 300 MHz to digital communication that escapes the huge mass of competing noise that afflicts the "low-split" channel. So the low-split channel has less bandwidth and involves lots of digital tricks, error correction, and compression, while the high-split channel is really the ultimate broadband solution, but it involves costly upgrades. The spectrum space above 500 MHz is available now in maybe 20 percent of all cable plants, and it's expected to be in a third soon, although TCI, because of financial problems, has recently cut back drastically on its upgrades, thereby creating a huge opportunity for Terayon in the low-split space.

Petrie: So you're saying there's not any one solution?

Gilder: That's right. Cable at best currently connects to 60+ percent of US homes, roughly two-thirds. Of those, 20 to 30 percent are upgraded suitably. Perhaps many of the rest can use CDMA as it rolls out. So let's say that 40 percent of the total homes can have broadband connections provided by cable, reasonably speaking, over the next three or four years. Cable is then contributing a lot of bandwidth, particularly to up-market households, which are those most likely to want broadband Internet connections anyway. The technology has to take off with the top 10 percent of homes.

Petrie: But doesn't that turn the cable companies into ISP monopolies?

Gilder: I don't think monopolies, because the RBOCs as well as MCI, Sprint, AT&T, and hundreds of others are all going into the ISP business.

Petrie: But if the cable company owns the cable network, which is the really high-bandwidth entrée into homes, aren't they the ISP of choice?

Gilder: They may be the ISP of choice, great, until they really overcharge you or start supplying bad service or whatever, at which point you use a direct TV satellite dish on your roof, connected to some upstream path through the phone network, or wireless cable, as they call it, in the 18-, 28-, and 38-GHz bands, which are increasingly being exploited and demonstrated for two-way communications. There just isn't going to be one monolithic solution. The lawyers in Washington believe that unless the government regulates every portion of this transition to a new, broadband Internet, some vast new monopoly is going to emerge. I think the technology is moving too fast for this; the old telephone model simply does not apply anymore. The model now is the dynamic computer-industry model, where radio technology, infrared technology, broadband select-and-switch technology, satellite channels, DSL, data-compression technology, wireless technology, and things we haven't even thought of yet are all coming on.

Now you arrive at SeaTac airport and Metricom is selling wireless Internet access at 28.8 kilobits per second all over Seattle. Metricom has around 20 networks installed at university and corporate campuses--I think Cisco Systems, Sun, and Stanford were among the first to use the system--and they are in San Francisco and Washington, DC. This is exciting. If your notebook can get 28.8 wirelessly, why the heck is the phone company trying to sell you 28.8 over wires and claiming it's wonderful technology? Not to mention putting forward ISDN as the solution of the future.

But I haven't heard how well Metricom is delivering on its promise of wireless access. It ultimately will have to compete with the CDMA PCS companies.

Petrie: If there are all of these new technologies, and bandwidth is, as you claim, doubling every year . . .

Gilder: I think it's going to double more often than every 12 months.

Petrie: . . . why are we having brown-outs?

Gilder: Because Internet traffic rose 16-fold in 1995. Then, in 1996, after Metcalfe made his big publicity campaign about an Internet crash, Internet traffic rose another 54 percent between August and the end of November. And it probably rose even more, because that figure applies to the network access points (NAPs) only. Meanwhile, some of the Cisco Systems routers were malfunctioning. However, recent data reported by Merit, which Metcalfe stresses, shows a 70 to 90 percent drop in router instability at all the NAPs from around September 1 through the end of December. This improvement, if the data proves out, is especially dramatic in the face of the 54 percent rise in traffic during this time.
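For scale (my arithmetic, not Gilder's): a 54 percent rise over those four months, compounded, approaches a fourfold annual rate,

$$(1.54)^{12/4} \approx 3.7\ \text{per year}, \qquad t_{\text{double}} = \frac{4\,\ln 2}{\ln 1.54} \approx 6.4\ \text{months},$$

so traffic, not just capacity, was doubling in roughly half a year.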

Petrie: But your law of the telecosm says that bandwidth capacity will always exceed the traffic, because as new nodes are added on they add the resources necessary to carry the traffic.

Gilder: That's true; it's just that there are sticky points, snags and glitches, in the marketplace for bandwidth. And that has two different effects. The first is that the extension of bandwidth will be very lumpy. There will not be a linear, Moore's Law-style progression of bandwidth. Instead, a new move of deregulation results in a huge increment of bandwidth being deployed. One such huge increment is satellite, which I think is going to be available for the Internet in the next year. One giant digital satellite can transmit downstream about 270 terabytes a month, which matches the total of all US Internet traffic, at least up until about August: traffic through the NAPs for the month of August was about 270 terabytes.
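The satellite figure is easy to sanity-check (my arithmetic). Spread over a 30-day month, 270 terabytes is a sustained rate of

$$\frac{270 \times 10^{12}\ \text{bytes} \times 8\ \text{bits/byte}}{30 \times 86{,}400\ \text{s}} \approx 8.3 \times 10^{8}\ \text{b/s} \approx 830\ \text{Mb/s},$$

a bit more than one OC-12 trunk (622 Mb/s) running flat out.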

Petrie: Do you predict that ISPs next year will be turning to satellite service and bypassing the RBOCs?

Gilder: Well it's a complicated engineering problem to solve. Hughes has a solution in DirecPC, which offers Internet access via satellite network to your PC. They devote a couple of transponders to Internet access, I believe, on their Galaxy 5 satellite, which is mostly devoted to their VSAT (very small aperture terminals) network that offers private corporate connections for high-speed data transmission. At present they don't have much incentive to expand their Internet offering, because it competes with VSAT communications, for which they get compensated much better. But by the time your magazine comes out, they should have shifted their Internet offering from a VSAT satellite to one of the DBS (direct broadcast satellite) TV satellites (although I hear they are foolishly reconsidering this move). When they do move their Internet channels to DBS, rather than competing with the VSAT network, they'll be competing with another cable time-shifted channel. I think the Internet's steadily going to win that competition. So the satellite people, who are launching at least downstream bandwidth all over the place, will bring major relief to Internet traffic.

Another point about bandwidth is the current abuse of it for applications like downloading software upgrades, where companies like Netscape and Microsoft are making millions of point-to-point connections to the various Microsoft and Netscape servers. Why use Internet point-to-point bandwidth for what is inherently a broadcast-and-select application? If you broadcast the software upgrades to everybody at once, people who want them can decide at their leisure to decrypt a particular program, and pay for it if there is a charge involved. And that just makes sense. You use a broadcast-and-select system for broadcast-and-select applications, and you use the point-to-point resources for point-to-point uses.

Petrie: But now you're also pointing out that bandwidth is increasing faster than the switches. So doesn't that bring us back to cheap wires essentially in the bandwidth? If so, wouldn't that argue for select and switch rather than broadcast and select?

Gilder: We are also getting cheap switches distributed everywhere around the world. But cheap bandwidth means that storage and switching will migrate to the optimal point. Moore's Law prevails here. These aren't absolute binary issues, however. You don't have all broadcast and select with no switches, or all switches with no broadcast functions. You have a mixture of the two, which as bandwidth expands allows an optimal mix.

Remember that the fibersphere I've written a lot about is based on wavelength-division multiplexing of tremendous amounts of bandwidth, which can serve as a substitute for switches with all-optical repeaters. And by the way, they've just developed fluoro-zirconate repeaters that can handle the whole bandwidth of fiber. I don't know whether this technology will actually prove out or not, but for the first time they have demonstrable all-optical amplifiers that can handle the entire intrinsic bandwidth of fiber, which is quite an amazing development in just a year or two. The erbium-doped fiber amplifiers top out at 4.5 terahertz, so they can't accommodate the 25 to 75 terahertz that every fiber, theoretically, could hold.

But, to return to your original question, there are all sorts of possibilities for very broadband fiber communication, and switching every little message within that bandwidth is going to be incredibly cumbersome, so you really want some kind of more passive, broadcast kind of technology that takes advantage of the hugely growing, Moore's Law-driven capacity in the terminals. It makes more sense to have all these people who have virtual supercomputers on their desks, or in their pockets, or wherever they are, find what they're looking for rather than to have humongous switches all over the place selecting and sorting out every particular message.

Petrie: It's not because switching is cumbersome, because as you say, now we can sink the complexity into the silicon. The issue is that switches are cheap, and so since they can be distributed . . .

Gilder: Right, that's enough.

Petrie: But is broadcast a better use of bandwidth?

Gilder: Well it's a better use of certain kinds of bandwidth. It's a better use of satellite bandwidth, it can be a good use of all-optical networks. Paul Green at IBM estimates that ultimately you'll have 10,000 different gigabit-per-second bitstreams running down a single fiber thread. They can be multiprotocol, each can be different, and they can be passively routed by passive optics. Now that kind of technology is very suitable for various broadcast applications. You go into that bunch of the 10,000 bitstreams that interest you and sort them yourself. I believe this will be a major way that information is distributed and communicated in the future. This is the fibersphere, where you tune into a frequency the way you currently tune into a radio frequency in the air. In response to Metcalfe, though, the point I want to stress is that none of these systems are going to be universally dominant. You're not going to have a new public switched telephone infrastructure that is homogeneous around the world; you're going to have lots of different forms of bandwidth competing, at least for the foreseeable future, and it's out of this competition that the solution to the Internet bandwidth crunch will emerge.
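Green's estimate multiplies out (my arithmetic) to a staggering but physically plausible aggregate:

$$10{,}000 \times 1\ \text{Gb/s} = 10^{13}\ \text{b/s} = 10\ \text{Tb/s},$$

which, even at the 25-THz low end of fiber's intrinsic window cited above, demands an average spectral efficiency of only 0.4 b/s/Hz.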

Petrie: But you have written that the current TV/telephony infrastructure will cease to exist.

Gilder: Yes. I mean TV and telephony. Telephony defined as a centrally switched voice-optimized network, and TV as analog broadcasting. These are not going to be the important communications channels by the beginning of the next century. The Internet is going to take over. TV broadcasting in particular is going to be an orphan technology by that point.

Petrie: Won't they also broadcast 10 million TV channels for people to tune into?

Gilder: Then this wouldn't be television. The very nature of television is that you have dumb receivers, and you broadcast a selection of channels chosen by the broadcasting companies. To the extent that you can find the stuff you want around the globe, store it, time-shift it, and interact with it, it's no longer TV. That's the triumph of the PC. I think TV will be a subset of computer technology, and you'll be able to watch the Super Bowl on the appropriate screen that you choose. There will be a variety of different display technologies available, and portable computing devices will be able to connect to those displays, probably through some infrared or RF link. The whole idea of the TV box will disappear, although there will be displays that look like TV displays.

Petrie: When Tim Berners-Lee created the Web, he intended authoring and editing to be as fundamental as browsing. Somehow, that got lost in the initial generation of browsers, though it seems to be coming back. Will creative interactivity win out over the passive Web TV model?

Gilder: I think this is implicit in the triumph of the PC or network computer--what I call the teleputer. My favorite description of this outcome is that one person at a workstation will have more creative power than an industrial tycoon of the previous age, and more communications power than the broadcast tycoon of the television age. Certainly the same technologies that make possible the evolution of the Internet to this new, broadband nirvana also will endow the terminals on the Net with fabulous creative potential, and you'll be able to make whole films on single workstations at costs that are radically below the cost of the typical Hollywood television offering.

Petrie: So you think that the network computer will empower people to be authors?

Gilder: Yes. I believe that the Java runtime engine will be refined to the point that it is a robust vehicle for authorship. Already, as Robert Brodersen of Berkeley's Infopad project has reported, any person with a Java browser can design an entire finite-state machine function, including schematic entry, logic synthesis, VHDL code generation, layout, and even simulation of the final design.
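For readers who have not met one, the finite-state machine at the heart of such a design is a small object. Here is a hypothetical Python rendering of the kind of machine such a tool would capture and simulate--a detector for the serial bit pattern 101 (the example and names are mine, not Brodersen's):

    # State-transition table for a Moore machine detecting "101" in a
    # serial bit stream; "found" is entered each time the pattern completes.
    TRANSITIONS = {
        ("idle",  0): "idle",  ("idle",  1): "got1",
        ("got1",  0): "got10", ("got1",  1): "got1",
        ("got10", 0): "idle",  ("got10", 1): "found",
        ("found", 0): "got10", ("found", 1): "got1",
    }

    def detect(stream):
        state, hits = "idle", []
        for i, bit in enumerate(stream):
            state = TRANSITIONS[(state, bit)]
            if state == "found":
                hits.append(i)          # index where "101" completes
        return hits

    print(detect([1, 0, 1, 0, 1, 1, 0, 1]))   # -> [2, 4, 7]

Logic synthesis turns exactly such a table into gates and flip-flops; the browser-based tool's novelty was doing the entry, synthesis, and simulation remotely.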

Petrie: But will that make a difference if there's no local storage?

Gilder: If you're going to be an author you may well link local storage to it. There's nothing in the network computer that says you can't have local storage if you want it.

Petrie: Doesn't that violate the whole idea of minimal network computing?

Gilder: I believe there will be minimal network computers. I expect to find them in every hotel room, on my tray table in the airplane, in libraries, schools, all over the place. They'll be a terrific improvement in the overall availability of computing power to masses of people. Now I don't believe that the absence of local storage will matter for most of those applications. You're asking me if the people who want to create a major new software program or VRML content are going to use network computers? They may when they're on the road and they want to make some small change, but I think they'll probably be sufficiently sophisticated to actually run the terabyte drive that they have linked to their machine. And I don't believe the distinction is really as clear as you imply, because I think the idea of remote storage is acceptable to most people. Let's use the example of people designing microchips on Sun workstations, where the actual design to which various engineers are contributing is at a server, perhaps not even in the same building. If you move that server across an all-optical network somewhere, I don't think the change is very significant.

Petrie: Another type of creativity is seen when a user determines what is broadcast, like with PointCast. And you can certainly do that on a network computer.

Gilder: Yes, the first thing the Internet brings us is choice. That's its great contribution, because choice is good. And as I've maintained at length in Life After Television, it moves us from a lowest common denominator culture, decided by a bunch of broadcasters, to a first-choice culture resembling a Borders bookstore, with 150,000 titles and thousands of magazines on the shelves. It's a better culture, and this is the moral message of the Internet. Its decisive improvement over Hollywood and TV is derived from its offering of cornucopias of choice. And so I think that's absolutely vital, and the network computer makes these choices more robustly and readily available to more millions of people.

Petrie: While we're on the subject of choice, what do you think about the First Amendment issues currently before the Supreme Court concerning the use of legislation to block the viewing of pornography on the Internet?

Gilder: My belief is that you don't have to change the laws to deal with child pornography or snuff films or other extreme cases that are employed to justify sweeping regulation of the Net. I think they're a distraction, a red herring. My 12-year-old son is on the Net all the time, and I'm eager for the evolution of techniques applicable at the terminal to lock out certain domains of the Internet to children. But I think porn of sufficiently revolting character is widespread all over society. If the politicians want to crack down, how about the Spectravision boxes in every hotel room?

To focus on the Internet bespeaks another agenda. And I don't approve of the other agenda, which is to control this new communication system, because the way they controlled the old one has been a disaster--it has greatly slowed the extension of bandwidth and led to this kind of optical illusion, or nonoptical illusion, that bandwidth is somehow scarce and difficult to create.

Petrie: This also reminds me of the paranoia about security on the Internet. For example, online banking on the Internet requires you to have a US-grade security browser, a user ID, and a password to access the same service you can use three digits to access by telephone.

Gilder: I completely agree with that observation. There is a paranoid note in this encryption and privacy issue. But I think corporations do have a real problem. If you're sending billions of dollars of value across the Net, you've created a huge incentive for people to break your codes and skim off some small proportion of your value flow.

Petrie: But we're not talking about financial transactions. Those have been secured for quite some time.

Gilder: But how? They're using the DES (Data Encryption Standard) algorithm, which is a fairly low level of encryption employed by banks for transmitting funds. I know it works--I really don't agree with the thesis that the Internet is insecure--however, I'm willing to imagine there are applications where you want more security than currently exists. But we are talking about a lot of issues here all at once. The encryption issue about terrorism, for example. Banning strong encryption in order to thwart terrorists means that only terrorists will have strong encryption. I really think that's accurate--or at least only foreign countries will be able to have encryption. So the encryption technology will tend to move overseas where it's completely beyond the reach of US security.
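As an aside for the curious, DES is easy to exercise today. This minimal sketch uses the third-party pycryptodome package (my choice for illustration, not anything from the interview) to encrypt and decrypt a funds message in CBC mode; the 8-byte key carries only 56 effective bits, which is why DES counts as a fairly low level of encryption:

    # pip install pycryptodome  (an illustrative stand-in; banking systems
    # of the era typically ran DES in dedicated hardware)
    from Crypto.Cipher import DES
    from Crypto.Util.Padding import pad, unpad

    key = b"8bytekey"                     # 64-bit key, 56 effective bits
    cipher = DES.new(key, DES.MODE_CBC)   # random IV generated per session
    wire = cipher.iv + cipher.encrypt(pad(b"PAY $1,000,000 TO ACCT 42",
                                          DES.block_size))

    # Receiver side: same shared key; the IV travels with the message.
    decipher = DES.new(key, DES.MODE_CBC, iv=wire[:8])
    print(unpad(decipher.decrypt(wire[8:]), DES.block_size))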

I think the government's going to figure out that they want the best encryption to be American. And to disagree with the current wisdom, there is an arms race. The arms race is with the terrorists. There's no question about it. But there is no quick technical fix for this arms race. The government has to understand this is a dynamic rather than a static arms race, and you won't be able to solve the problem by treaty. The problem is that of evil in the world, and it's something all of us, including people who want a wild and woolly Internet, depend on our government to address.

Petrie: In the context of the Internet, you frequently portray the government as the villain. To quote Life After Television, "A federal program of fiberoptic freeways will end like the great concrete freeways built in the Third World, running from nowhere to nowhere and used chiefly as shelters for the homeless."

Gilder: I really do think that networks will be best maintained through some market structure, and I would prefer to improve the market. The government does not have a lot of additional resources available to lavish on creating vast new communications networks. And I don't think this is bad. I think ultimately the most robust Internet will emerge from a market process, from market signals. The problem with the market signals today is they're distorted. You know, your T-1 lines that cost the phone company $100 to deploy cost $500 to $1,600 a month to lease.

Petrie: Tom Kalil, a senior director of the White House National Economic Council, doesn't believe government needs to lavish money on maintaining the Internet. He said they should instead feed the research that spawns successful technologies using relatively small amounts of money.

Gilder: In the US most research and development is financed by the government. I don't think this is a necessary, permanent condition, but it is the existing system that we have installed.

Petrie: Isn't this good?

Gilder: You know, it depends. It's not necessarily good. It's bad to the extent that it means that political lobbying becomes the chief governor of rewards in research in America. And some of this is already happening, particularly on environmental issues.

Petrie: Let's talk about computer science, which has been funded by DARPA for the last 30 years. DARPA has said more or less that they're planning to get out of that game. What will this do to computer science in this country?

Gilder: Maybe DARPA says it's not going to be financing as much computer science, but this illusion that there are no defense problems anymore is not going to survive the next big catastrophe. You get one nuclear explosion anywhere and there are going to be gouts of money thrown at universities to push whatever technologies are emerging. And certainly computer technology is such a core capability for meeting any of these threats that any federal money will yield benefits for computer science departments.

Petrie: But you don't agree that the government should seed technology development other than through defense spending? For example, the October 1996 initiative by the Clinton administration to fund development of network technology--you and your friend Newt Gingrich aren't going to support this?

Gilder: The reason the government can present a list of successfully sponsored technologies is that during the period when the R&D for these technologies occurred, government dominated all research and development. We came out of the Second World War with radar and a myriad of new technologies. We moved into the Korean War and the Cold War, and it doesn't matter what technology you talk about, it was financed by government. So of course if you follow the pedigree of any technology you can name, you can always find a government dollar in there. Now if you want to imagine that government dollars have some magical property that renders them especially crucial as seed dollars, then you can show that government financing is indispensable to any technology.

There is another model to look at, and it's worked very well, although perhaps not as well in some respects as the US model. In Japan, rather than 50 percent of R&D being financed by government, the figure is less than 5 percent, which is contrary to what people imagine. You have the Hitachi University, and so on; they've developed all sorts of other means of seeding and financing technology. But in the US, where the government does it for nothing, US companies don't have the same incentive that Hitachi does to create a Hitachi University.

Petrie: So you believe if the government gets out of the research business that we'll start getting GM universities?

Gilder: Yes, we'll get GM universities, we'll get all sorts of private ventures that perform long-range research on contract. And we'll have a variety of universities funded by different forms of endowment. That is imagining this wonderful libertarian disquisition where government cuts back substantially on the proportion of total national income that it captures. However, I think under existing circumstances I would vote for most of those programs that you're talking about, and I would tell Newt Gingrich, if he asked me, to vote for them.

One of the disputes I have with the organized libertarians is that they assume we have a perfect world and they don't understand the significance of the installed base. The installed base today, the legacy system, is overwhelmingly oriented toward government financing of R&D at the university complex in America. I agree it would ultimately improve the quality of R&D if most of this were wiped clear. Of course there would be a transitional phase which would be quite catastrophic for hundreds of thousands of people in universities and government labs. Thus it would be nearly impossible politically to cut it back, which means, as a practical matter, we have to optimize the current system.

Petrie: What about the Sematech consortium's efforts to improve US semiconductor chip production technology? Isn't that an example of successful government funding of research?

Gilder: No, do you know about the photolithography disaster that happened? Sematech decided they had to retrieve GCA, which they viewed as the historic American best hope for photolithography. So they appropriated a substantial proportion of their money for photolithography and channeled some $20 million to GCA, the dominant US stepper company (like a reversed slide projector, steppers project chip designs inscribed on a photomask or reticle through a series of reduction lenses and onto a chip).

By keeping GCA alive beyond its natural span, they competed against Ultratech, which had an interesting 1-X stepper technology. (Ultratech simplified the stepper by moving the reduction optics away from each machine on the waferfab line back to the photomask maker.) And they competed against ASM Lithography, a company with mostly American customers and mostly American technology that happens to be headquartered in the Netherlands, which rendered them beyond the pale for Sematech. Essentially what they did was to weaken all the forces that might otherwise have generated a successful photolithography alternative to Canon and Nikon, which themselves are competing intensely and supply American companies very satisfactorily. If there's a complaint that Canon and Nikon have too many links to Japanese semiconductor firms, then maybe ASM could be an alternative source, but ASM doesn't apply because they're foreign.

Petrie: So instead of having a broad base of support for the industry, they balanced everything on one leg?

Gilder: That's right. And the result was GCA was kept alive for a few more years, and Ultratech was weakened and didn't emerge as a real competitor. The combination of step-and-scan technology that originated with IBM and Perkin-Elmer was disadvantaged, because Perkin-Elmer essentially left the business when Sematech came in and decided GCA was their champion. Really it had no good effect in photolithography, and that was the key area that they chose to stress.

They had already shifted from DRAM (dynamic random access memory) to SRAM (static RAM) to photolithography. And after they screwed up photolithography, they moved to functioning as a kind of standards body for interoperability of other technologies and other cluster tools. Their chief function was to compete with Applied Materials, which was establishing a standard for cluster tools. And so because of Sematech there were two standards: Applied Materials' standard, which really was the leading US standard being extended around the world, and the MESA standard, which was the Sematech contrivance. You know, they're still trying to find out what to do, and yet they have the temerity to claim they somehow were responsible for the recapture of the semiconductor industry that we never lost to Japan in the first place. There was a blip caused chiefly by the changing relationship between the dollar and the yen. Of course, if the yen suddenly doubles against the dollar, all Japanese production doubles in value in relation to US production. But this was a transient. I think the whole Sematech story is dishonest, and it's what happens when government launches a program.

Petrie: George, I want to thank you for a fascinating interview. Our magazine views you as a model for how Internet technology should be understood and measured, and we are delighted to include you in our premier issue.

Gilder: My pleasure.

George Gilder is a contributing editor of Forbes ASAP and a fellow of Seattle's Discovery Institute. He is well known as a leading architect of supply-side economics. Gilder has written several best-selling books, including The Spirit of Enterprise (1986; revised version 1992), Microcosm (1989), and Life After Television (1992; updated paperback edition 1994). His forthcoming book on the future of telecommunications, Telecosm, is due to appear in 1997. In 1986, President Reagan awarded Gilder the White House Award for Entrepreneurial Excellence. Gilder was made a Fellow of the International Engineering Circle in November 1996.

PCS AND THE WIRELESS MARKET

Personal communications services (PCS) are one reason for the dynamic growth of the wireless communication market. PCS is used to designate wireless services, including telephones, personal digital assistants (PDAs), and wireless PCs, that transmit signals at 1,900 MHz (cellular phones transmit at 850 MHz).

There is currently hot competition between the two dominant standards for PCS networks: TDMA (time-division multiple access) and CDMA (code-division multiple access). Most existing TDMA is based on the PCS-1900 architecture of GSM (Global System for Mobile Communications), the European digital cellular phone standard used by about 150 wireless providers in 86 countries. TDMA uses exclusive narrowband frequency channels, subdivided into time slots. US TDMA, which is compatible with the US cellular analog standard, is the IS-54 standard.

CDMA technology was developed in 1989 by researchers in the US after European adoption of GSM and US approval of TDMA as a standard. CDMA transmits all messages at the same time and the same frequency and spreads them across the spectrum with pseudorandom noise, creating a wide-bandwidth signal that is unscrambled at the receiver base station. Support for CDMA has taken off in 1996, and as we go to press CDMA appears to have a slight edge with US network developers. The CDMA standard is IS-95.

CDMA is not restricted to the air: CDMA systems also support the sharing of satellite spectrum and upstream communications through the noisy lower channels of cable TV coax. Motorola's Iridium LEO (low Earth orbit) satellites, due to begin launch in 1997, will use TDMA, and Loral and Qualcomm's Globalstar and TRW's Odyssey, also scheduled for 1997 launch, will use CDMA.

GILDER'S PICKS OF KEY INTERNET TECHNOLOGIES FOR 1997

  • 1. Broadband digital radios that replace the 416 hardwired radios in a cellular base station with one programmable radio that can take the entire cellular band and mix the signal down to baseband, where it will be converted to a digital bitstream that is modulated, demodulated, filtered, and channeled in digital signal processors. This technology will reduce a broadband base station to a briefcase.
  • 2. The Java programming language and platform that will increase the productivity of programmers by a factor of three or more and allow component software on the Net.
  • 3. The Java teleputer, an Internet access device, in every hotel room, on every school desktop, library workstation, kiosk, and airline tray table that can use an ever-expanding array of Java programs, including Java Office suites.
  • 4. Wavelength-division multiplexed all-optical networks based on broadband optical amplifiers that allow cheap bypass of phone company switches.
  • 5. Hardware transaction processors that turn personal computers into secure pay-per-view Internet sites for transactions from millicents to millions.
  • 6. Household Ethernets running on telephone wiring.
  • 7. Smart cards turning your PDA into an Internet transactions processor.
  • 8. Direct broadcast satellites converted for Internet downstream service.
  • 9. Low Earth orbit satellites that bring bandwidth on demand to computers around the globe.
  • 10. Mediaprocessors on DRAM that allow one-chip teleputers for Internet access.