Interview: Eric Schmidt on the Intranet Wars
IEEE Internet Computing, September/October 1997, pp. 8–15
Can the Java Virtual Machine Really Challenge Microsoft?
Java has a lightweight threads model in which context switches are very fast. Because of that it's possible to have extremely high-performing, parallel protocol solutions. The development of these applications by talented programmers should be much more reliable than their equivalent using Windows NT.
— Eric Schmidt
In the wildly competitive market of intranet technologies, the war is on between the Java and Microsoft platforms for software development. Not that Java is dominating the current market — it's still far from widespread in the enterprise, where to most IT professionals the fact that it is interpreted and not compiled still means slower, relatively untested solutions. However, a loosely coupled consortium of industry heavyweights — including Netscape, IBM, Oracle, Sun, Novell, and the now wildcard Apple — is betting Java's open standards and multiplatform environment will let them offer a significant alternative to the Windows/NT franchise. (See also the IC Online interview with John Paul, head of server technology at Netscape.)
Eric Schmidt continues to be a central player in this struggle for market dominance. Schmidt, who earned a PhD in computer science from UC-Berkeley in 1983, has spent a career being in the right place at the right time. A researcher at Xerox PARC when the personal computer was being defined, Schmidt subsequently spent 14 years at Sun, culminating in his promotion to chief technical officer in 1994 — when Java arrived on the scene. Responsible for overseeing Sun's core technologies (Sparc microprocessors and the Solaris operating environment), as well as its research division, Schmidt oversaw the transformation of Java from an interesting programming language to Internet phenomenon to strategic asset as Sun gained partner after partner in Java development.
In April 1997 Schmidt came to Novell with a mandate to propel the networking giant from its proprietary technology base into the IP world. The year has been a tough one for Novell. After failing to meet its 2Q projections by a wide margin, Novell had a brutal 3Q, in part due to Schmidt's tough decision to halt shipments to distributors to clear out backlogged inventory. With revenues down 75 percent from 3Q 1996, Novell's future as an industry leader may be uncertain. Schmidt's immediate response: to cut the workforce by 18 percent, to revamp the executive structure, and to relentlessly focus the company on the development of IP-based network services. The new, Java-enhanced NetWare is due out by mid-1998. Schmidt's long-term solution, bolstered by bringing in key industry figures John Slitz from IBM and Chris Stone, founder and CEO of Object Management Group, is the move forward to a new, Java-based franchise.
IC's Editor in Chief Charles Petrie and Acquisitions Editor Meredith Wiggins asked Schmidt to talk about the idea of using Java and other interplatform standards as a kind of virtual operating system for running network services, and about his attempt to turn Novell into a company that has those kinds of products.
What follows is apparently vintage Schmidt, who is known for his clear-spoken, open, and laid-back style. We hope it helps you decipher the confusing mix of products and services that characterize the current intranet market.
Perhaps the place to begin is chronologically with the genesis of Java at Sun. You're known to have pushed hard within Sun for Java to be an open platform. Could you tell us a bit of the story behind that?
Well, in retrospect it appears as though we were all brilliant, but in fact we were just trying to get through the day. I don't think anyone, at the beginning, had any idea what the outcomes would be. We just knew we had an interesting technology.
Mosaic had just appeared, and a number of us at Sun were studying the Mosaic model. Mosaic, as you know, was basically free. I went around in early 1994 telling people this was a pretty interesting model. The joke then was, "Do you know what URL stands for?" And the answer was, "Ubiquity first, revenue later."
My contribution, if I had one, was to push the ubiquity model through the necessary politics at the time. Today this model is well understood, but at the time it was novel. Scott McNealy, the president of Sun, understood it because he understood the notion of brand identity and platform capture.
On the Internet ubiquity also implies open standards. Did you have to fight for this scenario within Sun?
Perhaps, but it was so obvious even then that the Internet would reject closed approaches that I simply didn't hear those objections. Remember that Suns were used as a reference model for TCP/IP, so the company was familiar with the model of open standards.
At the time, I was also looking into the concept of architectural franchises. The easiest way to understand this concept is to ask, what is the value of the architectural franchise that Windows has? It's equal to the stock value of Microsoft, which of course is a very valuable company. So having a set of APIs that a very large number of companies have implemented against turns out to have an extremely high economic value. If you want to buy Microsoft's architectural franchise, you have to buy the company.
The question clearly was, how do you establish an architectural franchise that's not controlled by Microsoft (or anyone else)? The way you do that is ubiquity plus open systems plus getting stuff out there — literally the give-away strategy. Mosaic had done this but not at an API level.
What we were able to do in the case of Java was to establish what is potentially an alternative architectural franchise.
Tell us how that came about.
Bill Joy was the first to see Java as having implications that went beyond the Web. The standard speech I gave about Java in the beginning was, in essence, "Your pages are static, they're boring. With Java they become dynamic." On the Web the interoperability issues are paramount, because who wants a solution that only runs against one particular Web server or one particular client? So Java was an easy sell here; everyone understood this.
But then Bill and others with my help began to push a model that took Java beyond the browser. The original Java license was to Netscape in May of 1995, and then to Microsoft in December of that year. These deals were, of course, browser-centric. In March of 1996, however, Sun did the first deal with a company where Java technology was no longer tied to a browser, but could in fact be embedded in a platform — and this company was, coincidentally, Novell. This is the license that was subsequently modified for all other vendors.
At this point it was clear we could position the APIs of Java as an architectural franchise that was independent of the browser and platform. And from then on it was a series of events culminating in the 100% Pure Java program, which was announced in March of this year.
Those were probably the six most intense months of my life. We knew we were going to change the world — we didn't exactly know how, mind you. It was very satisfying. I'm also happy, for me personally, that it ended, because there's only so much of that you can take! But the end result was now instead of having 15 people working on Java, there are 600 at Sun, and thousands at IBM, and hundreds at Novell.
So here you are leading the charge for the Java VM concept. Why then did you choose to come to Novell?
It was time to work on something new, and what I was interested in was network services. It's been the holy grail of computing for a long time. Novell is the largest networking company in the industry, and although perhaps damaged by previous forays into other things, it's been my observation that networking has a very large flywheel — it's hard to throw us out.
Plus I saw a situation where there was an opportunity for leadership. It's a huge base, larger than I could possibly describe to you. The company has excellent technology, and there's a product shortage. What this said to me was perhaps the wrong focus or the wrong product decisions. These are problems I think I can fix, working with the right people. So I took the job.
So one asset Novell has is its huge installed base. What are the technical assets Novell can bring to this market, and what technical issues do you have to overcome?
Before I answer that, I think it's important to underscore that this is not a technology play — this is business. Technology is only a part of it. Speaking as a technologist, I can tell you the first problem with technologists is we assume the best technology wins. Of course it doesn't. So what I would like to do is have both a great business and great technology, although the two are not necessarily linked.
On the business side, we are busy making the necessary changes to get focused on our customers, plus the internal changes needed to focus on objectives, get alignment — all the kinds of things you would imagine in a situation like this — they're very straightforward.
Absolutely. And that's what customers want. You're a technologist; you have the same bias I do, and you assume that customers share it. Some don't. Some actually read Time magazine and make a technical decision based on the cover.
Now on the technology side of things, in terms of assets, the company has historically had the most scalable and most powerful dedicated kernels for networking protocols. And indeed we have a new kernel called MPK (Multi-Processing Kernel) in development inside the company, which is part of the next version of NetWare (code-named Moab). This new kernel allows for control over preemption. You can essentially choose to preempt your protocol or not, which gives you complete control over Quality of Service. I don't know of anyone else who can do this. It's a huge differentiator. So if you, for example, wanted to build a real-time service, you can build that real-time service on us. You cannot do it on NT and Unix at the same level of control.
Usually with more control comes more management headaches.
Of course. That's why we're a specialized platform. This is not for the faint of heart. But if you're a developer who cares about protocols, we're the one to back.
I'll give you some other ideas. Our Java approach is to take the virtual machine and put it inside the kernel. Because we run native on the naked hardware, by embedding the virtual machine architecture at the kernel level, we can do all sorts of — well, I call them "stupid pet tricks" — to make this stuff very, very interesting to the protocol person — much better control over performance, much better control over scalability. So we're saying if you have a solution that is network-centric — and this is a pretty big market niche — then we'll be the one to beat.
Examples would be the encoding of EDI systems, or specialized financial transactions?
The easiest example to use is the Web server. When you use a Web server you think of it not as a machine, but rather as a set of protocols. So from your perspective on a network — via a client, device, whatever — the Web server is nothing more than a set of protocols, which have certain performance characteristics. That's how I want you to judge our products.
Let's say you're on the Net, and you type www.microsoft.com. Do you know what kind of operating system that machine is running? Would it surprise you that at one point they had to switch it to run on a DEC Alpha machine? Would it surprise you that Netscape mixes and matches their FTP servers and their mail servers and their Web servers based on performance?
No, not at all. Because they're smart network people.
Yes, and they choose best of breed. The argument I'm making here is that performance matters. To the customers we talk to, performance really matters.
By doing the right things in the platform, we can have a differentiated platform offering for network services. We then in a parallel effort have to bring out a set of network services. My experience with the phrase "network services" is that it's very confusing to people. It's a term that's almost content-free, which is one of my biggest marketing frustrations. If you think of Java, independent of what Java is, it has all of the positive aspects of a brand. You know — it feels good, people are excited, they want to be part of it. Taxi drivers think it's good. We need an analogous name for this category of things called network services.
Let's go back to the Web server example. From your perspective, the type of hardware or software that is at www.netscape.com — or the fact that it is not a single server but rather a round-robin DNS (Domain Naming System) for 20 different servers based on your locality — you don't care. It's not a machine, it's not even a piece of software. It's a service. So what are some network services that might be more visible?
One of the most basic is a directory — finding things. A directory allows you to have a place that has your name, your password and personal things, or a machine's name, a machine's password and personal things. It's an arbitrary name-value pair. The industry has consolidated around a protocol called LDAP (Lightweight Directory Access Protocol). We have NDS (Novell Directory Services), which is a superset of that.
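The name-value-pair model Schmidt describes can be sketched as a tiny in-memory directory. This is a hypothetical illustration only; real directories such as NDS or an LDAP server add hierarchy, replication, and access control on top of this core idea:

```java
import java.util.HashMap;
import java.util.Map;

// Minimal sketch of a directory: each named entry (a user, a machine, ...)
// holds arbitrary attribute/value pairs.
public class MiniDirectory {
    private final Map<String, Map<String, String>> entries = new HashMap<>();

    // Store one attribute on a named entry, creating the entry if needed.
    public void put(String name, String attribute, String value) {
        entries.computeIfAbsent(name, k -> new HashMap<>()).put(attribute, value);
    }

    // Look up one attribute of a named entry; null if the entry or
    // attribute does not exist.
    public String get(String name, String attribute) {
        Map<String, String> attrs = entries.get(name);
        return attrs == null ? null : attrs.get(attribute);
    }
}
```

The same interface serves people and machines alike, which is why Schmidt can describe it as "an arbitrary name-value pair."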
Another example of a network service would be distributing software from the server by user rather than machine. We have a product called Novell Application Launcher that lets you go from machine to machine. You type in your name and password, it knows who you are, it knows what programs you like to run, and it brings them from the server to the client and gets them installed properly. This is a hot new network service that Novell actually has a lot of prior technical experience with.
BorderManager is yet another example. It's a proxy/cache, firewall, gateway solution. You put it between your company and the network. Because it integrates with NDS, servers running BorderManager can find each other and replicate locally, so it's a fascinating bit of technology. The permissions part of the firewall sits there, does the connection, and then authenticates to you as opposed to the machine. So you and I can have a private connection (a virtual private network, VPN, using IP tunneling protocols) based on who you are as opposed to who your machine is. Nobody's ever done that before.
These are in some sense directory-enabled firewalls.
That's right. And by the way, I'm not suggesting that Novell is the only company that's going to play in this space. We just have to be first.
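The shift Schmidt describes — authorizing the user rather than the machine — can be sketched as a permission table keyed on directory identity. All names here are hypothetical illustrations; this is not BorderManager's actual mechanism:

```java
import java.util.HashMap;
import java.util.HashSet;
import java.util.Map;
import java.util.Set;

// Sketch: connection rights are granted to directory identities (users),
// not to IP addresses, so the same user is recognized from any machine.
public class UserFirewall {
    private final Map<String, Set<String>> grants = new HashMap<>();

    // Grant a user the right to reach a destination.
    public void allow(String user, String destination) {
        grants.computeIfAbsent(user, k -> new HashSet<>()).add(destination);
    }

    // Authorization depends on who you are, not which host you're on.
    public boolean mayConnect(String user, String destination) {
        return grants.getOrDefault(user, Set.of()).contains(destination);
    }
}
```

A machine-based firewall would key the `grants` map on a source address instead; keying it on an authenticated identity is what makes the per-user VPN Schmidt describes possible.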
How do the new network platform and services you're bringing to the market relate back to your intent to become a new architectural franchise?
We believe what will develop is a network services framework, a franchise that's higher level than Java, based out of Java, that allows applications developers to get the real value out of these networks. So again — here's Novell. We're busy trying to solve these problems of software distribution, licensing, and so forth. Wouldn't it be even more interesting if we had partners trying to do it, too? And in order to do that, we go in there and say, "Java is the answer. Here are the libraries, here are the services, here you go." Netscape, by the way, has an initiative that is similar to this — though it's not completely Java-based — called Netscape One.
So programmers are going to write to your network services franchise.
Yes. Ultimately these franchises work because programmers believe the religion, they believe there will be lots of platforms. So they take their scarcity, which is their time, and they build Java services to run on top of these platforms. And that's where I think you really get the synergy from all of this together. You start off with a fast protocol engine, you get a Java virtual machine that is very fast, you get a set of services on top, and then you add other companies — when X and Y do their start-up, they build some killer application in Java that runs on top of this framework, which is, by the way, not going to be simply a Novell framework.
The key here is to persuade people to build on top of this virtual machine versus Microsoft.
It's clear that there are only two choices. Again, software entrepreneur, two choices. One is Win 32 as it evolves, and there are many programmers developing to the Win 32 server and client APIs. How many? There may be one or two that aren't. It's my impression that every Java company is also hedging its bets with respect to Win 32. With respect to Java, however, there are so many companies supporting it, and so much innovation around new libraries, new classes, performance issues, the benefits of the model, and so on, that it makes sense to me it will be the other player.
Now let me give you some technical reasons why Java's a better platform on the server than Windows NT. Java has what is known as a lightweight threads model in which context switches are very fast. Because of that it's possible to have extremely high-performing, parallel protocol solutions. The development of those applications by talented programmers should be much more reliable than their equivalent using Windows NT. Nobody seems to want to focus on this, because Java has been so focused on the client side.
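The per-connection threading style Schmidt alludes to can be sketched in a few lines of modern Java. This is a hedged illustration: 1997-era Java VMs typically scheduled "green" threads in user space, which is what made context switches cheap, whereas modern JVMs map threads differently. The programming model, though, is the same:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.atomic.AtomicInteger;

// Sketch: one cheap thread per "connection", each doing a small unit of
// protocol work; the runtime multiplexes them all for us.
public class ThreadPerConnection {
    public static int serve(int connections) throws InterruptedException {
        AtomicInteger handled = new AtomicInteger();
        CountDownLatch done = new CountDownLatch(connections);
        for (int i = 0; i < connections; i++) {
            new Thread(() -> {
                handled.incrementAndGet(); // stand-in for parsing one request
                done.countDown();
            }).start();
        }
        done.await(); // block until every connection has been handled
        return handled.get();
    }
}
```

The appeal for protocol servers is that each connection gets straight-line code with no explicit state machine; how well it scales depends entirely on how cheap the runtime makes each thread.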
Windows NT does not have lightweight threads?
Windows NT has its own threads that are not lightweight and in fact are very painful. But in any case, they're in the wrong language. If you're programming a network-centric service, whether it's on the client or the server, Java is the perfect language for scalability and integration.
We expected you to push Java for its "write once, run everywhere" capabilities, not as a superior language for doing these types of low-level operating system functions.
"Write once, run everywhere" is a separate issue. If you talk to people who are Java experts they can do a better job than I of articulating the precise reasons, but the ability to do parallelism at the thread level and in general the way the Java object model works makes it a lot easier to build these applications.
So programmers will decide they can make better use of their scarce time by using Java.
They get better code. And by the way, even Microsoft agrees with this. They say Java's a great language. I think that even if you're building for the Microsoft Windows platform you will over the lifetime of a product have a more productive and more sophisticated programming force, and they will build better quality programs, using Java.
Is the key to success the application of Novell's prior expertise in network management to this new virtual operating system?
Over time. Again, I'm going to give you a CEO answer as opposed to a technology answer. You have to have a mixture, a good "virtuous cycle" between the products you build, the value the customer sees, and how the customer actually buys and is supported. These cycles, if they're done right, become self-reinforcing. Better technologies create more products, more products create more happy customers, more happy customers make the channel bigger, the channel gets bigger and you can afford more products. That's the nature of successful business. I think taking our current franchise and adding this interesting technology gives it a boost right now — although the right thing to do, speaking purely from the perspective of technology, is to build the same franchise out of completely new components. In other words, reengineer everything I've talked about to be based on Java, TCP/IP, complete open standards, the latest this-and-that. Unfortunately, we can't do that instantaneously, and so we have to get there iteratively.
For me the issue is how do I get the cycle times to occur as fast as possible? Part of that is to make sure we use our own technology. Part of it is to work with partners, and part of it is to find the areas where customers see distinct value in our platform and then promote that as hard as we can.
Are you in fact limited by the ability of your customers to move to the new standards and to become more Java-compliant?
To some degree, although I can tell you that that's often used as an excuse for lack of innovation in companies. I feel like our channel is strong enough and our customers are large and varied enough so that even if we invented another Java, if you will, we would be able to absorb it pretty quickly. There are always people who are early adopters and others who lag behind. What customers tell me over and over again is they're tired of all of these half-solutions that are coming out of the industry. They want us to make it all work together. In other words, they perceive all of these incessant open systems wars and protocol wars as being ultimately injurious to the problem they're trying to solve.
We as computer scientists assume that most people wake up in the morning and want to be working on technology for its own sake, where in fact the inverse is true. Most people view computing as a tool, and sometimes a pain in the ass. This heterogeneity is an opportunity for us, I think. The fact that there's so many things you have to mix and match is really killing our customers.
So what the IS people are after is homogeneity?
Consumers have the benefit that they can ask for mutually inconsistent things. They would like the conformity of homogeneity, but they would like the choice of heterogeneity.
Which is why the best solution is open standards.
Right, that's why we believe so strongly in open standards. If there's a single standard that gives you homogeneity, then you have multiple choices in terms of vendors, and you can evaluate Novell against its competitors and choose best of breed.
Let's turn now to some other technical issues. What about replication and consistency management? Do you support that?
We've already announced a product called Novell Replication Services. If you have a set of Novell file and print servers, we automatically replicate and cache all of the file and print information.
Do you do distributed files, and consistency within distributed files?
The next version of NRS does that; in fact there's a clustering technology called Wolf Mountain that does it with a very, very sophisticated cache and protocol. This is, of course, a very interesting issue, and from my perspective, our core market.
And you don't, by the way, view the Wolf Mountain Group situation as a threat? [In the early part of 1997 Darren Major and Jeff Merkey, co-leaders of the Wolf Mountain team at Novell, and Larry Angus, its operations manager, left Novell to found a new company, Wolf Mountain Group. Their intent is to develop the Wolf Mountain distributed server technologies for the Windows NT platform. — Editor]
We "allege" that the three individuals involved, who by the way are not functioning as a start-up today, stole our property, and they are being sued. There is a court case that you can read pages and pages of documents on if you'd like to see the details.
So we won't see the Wolf Mountain technology showing up in Microsoft any time soon?
You will see it running on top of Microsoft Windows and NT in a product offered by Novell.
And you expect that to be released some time next year?
It's linked right now to the next version of NetWare, because there's some serious architectural changes involved around the kernel.
What we see as a technical issue here is consistency management of replicated files. This is a deep, hard problem that hasn't yet been solved by the people trying to promote distributed databases.
Well, be careful. At the pure file textual level we can solve this problem now — that's what NRS does. The reason we can is because NRS is not application-specific. The problem with any kind of overlay of content over files is the mechanism by which this is done is application-specific, so you will never solve the problem you've just described unless you're willing to constrain what the file is used for. All you can do, as long as there's a back-up disk, is look at this bit change, and change the bit over there. And then there are all sorts of latency and update issues and so forth.
The other thing we're not doing, and I want to be clear about that, is we do not do transaction roll-back in the sense of genuine logging. In the real world there's a hierarchy of servers. First, there's the server you basically throw in the room, and you don't worry too much about whether it's up or down. We perform that role well — that's our historic base. Then there are fault-tolerant solutions that mirror, keep copies of things, keep things consistent. Novell is, as best as I can tell, a leader in this industry by unit volume, although nobody is aware of that fact. The third solution is true transaction roll-back, a guarantee of consistency — if the power goes off, you flip the power switch and get complete recovery. Absolute fault tolerance is much, much harder. And it's obviously application-specific.
It's especially hard in a distributed environment, where you have to depend on the application programmers to put their roll-back points in at the right point in every application. Now we're talking about distributed transaction management, another kind of network service.
Yes, and here, if you look at the industry, the only way that has been done is at an application vendor. Oracle has brought out a product that actually runs on two machines, monitors itself, and understands if one machine goes down. Of course it's not an inexpensive solution.
And that's only for machines going down. That doesn't account for cases where the transaction simply fails.
Yes. I've always been very interested in this area, the whole notion of how to do checkpoint roll-back. This is the so-called "atomicity" problem — how do you ensure that the transaction succeeds or fails as an atomic unit? I did some of my undergraduate work on this.
The typical way people approach it is to say, "I want the server to be completely reliable, I want it to be all logged, I never want to miss anything." Then you present them with the bill, and they begin to realize that maybe there's a gradation here. What the industry ultimately did was to center on what is termed "high availability," which means that you're up most of the time, but we're not talking about a guarantee that you would take to the bank. I think that's a reasonable solution, because the cost of that last one percent is very high.
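The atomicity problem Schmidt describes — a transaction must succeed or fail as a unit — is commonly solved by staging changes privately and applying them only at commit. A minimal sketch under that assumption (not how any particular product, Oracle's or Novell's, actually implements it):

```java
import java.util.HashMap;
import java.util.Map;

// Sketch: writes go to a private staging area; commit applies them all
// at once and rollback discards them, so the shared store never shows
// a half-finished transaction.
public class TinyTransaction {
    private final Map<String, String> store;   // shared, committed state
    private final Map<String, String> staged = new HashMap<>();

    public TinyTransaction(Map<String, String> store) {
        this.store = store;
    }

    // Buffer a write; nothing is visible to other readers yet.
    public void write(String key, String value) {
        staged.put(key, value);
    }

    // Apply every buffered write in one step.
    public void commit() {
        store.putAll(staged);
        staged.clear();
    }

    // Abandon every buffered write.
    public void rollback() {
        staged.clear();
    }
}
```

Real systems add a durable log so the commit survives a power failure, which is exactly the expensive "last one percent" Schmidt says the high-availability compromise declines to pay for.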
Do you see your company staying in an IT environment, then, not an e-commerce environment where that last one per cent would be essential?
That's a presumption on your part. It's not obvious to me that e-commerce requires that level of accuracy. The actual MIS systems that are now being integrated into the network to handle money are Tandem machines that are fault-tolerant and fault-similar. We're not going to build these banking types of machines. The interesting part of e-commerce to me is that now that we can talk to these machines, what can we do with them? All the services, the buying, the retailing, the selections, the evaluations, the projections, the forecasting, all those kind of things are not going to occur on those specialized machines. There's a huge space here.
The other interesting aspect of this is that it requires a highly secure solution end to end, and it requires directories that know where people are, and directories to put their keys in. Once you have directories and security, then you have what it takes to begin to do e-commerce. And this is where the money is. After all, ultimately business is driven by its ability to at least generate revenue, if not profit.
What I'm arguing here is that the money is going to be handled by machines that are already in place. We're building out connections to those machines, because we're not going to replicate the banking system.
So you think the actual settlement mechanisms are going to be run by the financial institutions?
Oh, absolutely. And anybody who thinks otherwise has not spent any time with the banks. The reason why banks are the way they are is because they're governed by a set of laws and a fiduciary responsibility that is significantly different from the rather cowboy-type approach we bring to networking in the computer industry. The level of reliability we offer would be unacceptable to them.
It would still be nice if there were network protocols to facilitate writing these types of applications — to avoid, for example, atomicity problems.
I agree with this. I will tell you the reality is that the transaction roll-back/atomicity issues are so painful that it's a specialized industry dominated by a few players. They're incredibly competent, and the size of the business is relatively small. The broad customer base — that is, the PC industry as a whole — does not use these facilities. You don't find them walking around in your Windows NT or Unix machines.
But that's what the transaction server is about.
How many of those are there on a network? One. How many other kinds of servers are there? Ten thousand. This is a very, very small market today.
But in 1992 you would have said that the Web server market was very small.
I would not. And that's actually a good example of the difference. It's very clear that offering a general-purpose service for unstructured data is a very large market. That's what the Web server is, because it's independent of content and application. The only issue was how quickly would data get re-purposed on the Web? And the answer turns out to be a lot faster than any of us thought.
So you think that this sort of transaction management with roll-back will continue to be a niche market?
For at least five or 10 years.
Okay. We're writing that down.
What you're describing is an important issue. It's just like saying I'd like a supercomputer in my bedroom. I would, but I don't want to deal with the noise. The cooling system. The power bill. Water-cooled mainframes use as much power as small cities. You don't really want them around your house. So there are real trade-offs.
As a final word, can we ask you to share some of your experience with those of our readers who are making the shift from technical to business-related jobs?
Businesses are mostly people-intensive, and technologists usually discount the value of personal relationships. Managers can use influence as well as "being right" to get things done. If you love people, you can easily make the transition.