IEEE Internet Computing

Greg Sands on Netscape’s SuiteSpot

On 27 August 1997, IC's Editor-in-Chief, Charles Petrie, and Acquisitions Editor, Meredith Wiggins, spoke with Greg Sands, senior product manager for SuiteSpot, Netscape's integrated suite of server software for Web, messaging, and crossware applications.

What's the relationship between SuiteSpot and Netscape's stated vision of the "Networked Enterprise"?

SuiteSpot is intended to provide a complete solution for an intranet, including capabilities for document publishing, collaboration, and messaging, and a platform for deployment of business applications. It incorporates, for example, our Enterprise server, our Messaging server, our Collabra server for group discussions, our Calendar server, and Netscape Directory Services.

The Enterprise server offers two primary capabilities. The first is content management. Enterprise enables every end user to have a directory in which to publish their documents and to maintain version control on those documents. It permits others to check in or check out new versions. It lets you manage the links among documents. Enterprise is also an application server that provides database connectivity for crossware objects, as well as support for IIOP/CORBA to allow communication with mainframe applications. It provides a Java and JavaScript run-time environment so that you can have application logic on the server as well as on the client, and even in the database within its stored procedures.

Gee, what doesn't it do?

What doesn't it do? Well, what the standard edition doesn't do is provide all the advanced management services for sophisticated intranets. We do have a professional edition of the suite that does in fact provide many of those capabilities.

What does it add?

It adds the ability for a company to be its own certificate authority, to issue and manage X.509 certificates for its employees, for example. It also provides the capability to manage client desktops: I can customize the client, set client preferences from a central location for all of the users in my organization, and then do automatic software downloads to those client desktops. So again, we manage down the cost of ownership such that when you've got a new application, new version of the software, new component of the software, or new plug-in, you don't need to touch every desktop. You can simply set a tag within the configuration file and have it automatically download the next time the client starts up.

So you have all these different pieces of software, and you're talking about various controls—you have version controls of different pieces of software, and they work together differently, or some versions of different pieces of software only work together with other versions of software. Do you provide any general support for managing consistency among the software versions?

I think there are a couple of aspects to that, one of which is that we obviously do a fair amount of quality assurance and testing, and we test in specific design scenarios. So for example, it is important for us to support not just Communicator, but also Navigator 3.0 and Internet Explorer. In the case of mail we test with other IMAP clients to ensure compatibility. And that's a very important part of what we do so that we can document to customers exactly what the software is warranted to do, and what it's not warranted to do.

So theoretically the IT managers don't have to do as much configuration management when they adopt a suite instead of separate proprietary products. They're assured that all these things will work together.

That's right. Many people, in fact most people, are migrating from isolated proprietary systems, and one of the big things standards offer is that, if you agree on a standard like IMAP-4, for instance, you can be assured that if you're providing an IMAP-4 server then you can manage the server without needing to dictate what software runs on everybody's desktop.

Tell us what IMAP-4 is.

IMAP, the Internet Message Access Protocol, is a messaging protocol. In particular, it provides very good support for off-line and remote usage. It's actually a replacement for POP. Our messaging server has supported both POP and IMAP for the last two generations.

And you'll continue to support POP?

That's correct. POP is very important and very useful. It's a simpler protocol that basically lets you either download all the mail to the client or keep it on the server.

What IMAP allows you to do is to keep it on the server and download only the headers when you're on a remote link so that you're not downloading all of the mail messages and attachments. You can decide to download the messages without the attachments, or similarly, when you're ready to go off-line--I'm about to pack up for the day from a docking station on my network, for example--I can download all my mail so I've got all the messages and attachments on my system, work with them off-line, and then come back and upload. So it provides much more of a fine-grained control for remote and mobile users than POP provides.
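The headers-only behavior described above maps directly onto IMAP's FETCH command. Below is a minimal sketch using Python's standard imaplib; the host name and credentials are placeholders, and `header_fetch_args`/`full_fetch_args` are hypothetical helper names, not part of any Netscape product.

```python
import imaplib

# Hypothetical server and credentials -- replace with real values.
HOST, USER, PASSWORD = "imap.example.com", "user", "secret"


def header_fetch_args(uid):
    """Return the FETCH arguments that pull only the message headers.

    BODY.PEEK[HEADER] retrieves the headers without downloading the
    body or attachments, and without marking the message \\Seen --
    this is what lets a client show a mailbox summary quickly over a
    slow remote link.
    """
    return (str(uid), "(BODY.PEEK[HEADER])")


def full_fetch_args(uid):
    """Return the FETCH arguments for the entire message, the mode
    you would use when going off-line with all mail and attachments."""
    return (str(uid), "(RFC822)")


def summarize_inbox(conn: imaplib.IMAP4):
    """Download only the headers of every message in INBOX."""
    conn.select("INBOX", readonly=True)
    _typ, data = conn.search(None, "ALL")
    for num in data[0].split():
        _typ, msg = conn.fetch(*header_fetch_args(num.decode()))
        print(msg[0][1][:80])  # first bytes of the raw headers


if __name__ == "__main__":
    # Requires a live IMAP server, e.g.:
    #   with imaplib.IMAP4(HOST) as conn:
    #       conn.login(USER, PASSWORD)
    #       summarize_inbox(conn)
    pass
```

The same connection can switch between the two fetch styles message by message, which is the fine-grained control over remote links that POP's all-or-nothing download lacks.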

Is everyone using IMAP?

IMAP does have very broad support. I think you'd put it in the category of an emerging protocol in the sense that everybody's committed to support it. Many vendors have support for it in their current products.

After a standard emerges, it takes vendors some period of time to provide full support, but as a customer you have the benefit of being able to ask a vendor if they, in fact, have full certified support for a protocol.

Do you see Netscape as the leader in pushing this protocol?

Yes. In fact we recently made an acquisition of a company called Portola Software, which is a very small messaging company. One of the most important benefits of that acquisition was their engineering lead, John Myers. He was in fact one of the co-authors of the original IMAP spec. His knowledge and experience will be a great help to us.

So along with Java, Netscape is pushing a suite of standards which hopefully will gain broad industry support, perhaps excepting Microsoft.

I would say that's generally correct, but I think of it this way: Netscape's approach to competition in area after area where proprietary protocols have given customers limited interoperability--basically making them beholden to the vendors of proprietary technology--has been to either innovate or identify relevant standards, to advocate and evangelize them, and to be first to implement them (and to do it in a superior way).

Doing that creates a great deal of customer value. It partly switches the center of power from the vendor side to the customer side. It helps reduce the cost of ownership for all the reasons that we've talked about. We've been successful in doing this not just with Web protocols, but also with IMAP on the messaging side; with IIOP in the application space; with LDAP, the lightweight directory access protocol, in the directory space; and with calendaring (which is in the early stages of protocol development). And you know, one could argue that CORBA has more momentum now than it's had at other points in its fairly long history.

I think one of the things that's helped make us successful is that we've been adaptable as the market has evolved very quickly. We did decide early on to focus on corporate customers.

What year was that?

Very early in 1995, February of 1995. For example, I remember having a conversation with Marc Andreessen where we said, "Look, this is our number one market. This is what's going to lead to the company's growth and our ability to extend the initial browser franchise." I've been on the server side of the house since I got here, and it was pretty clear even then that the kernel of what a Web server could do was already there inside the company. The big question at the time was, how does it need to interact with the corporate messaging environment? Marc was the one to say that the Web server pieces alone--content management and access to databases--weren't enough, that the other major piece required was messaging. That is the one thing that everybody uses, and so acquiring Collabra was really a way to kick-start the mail franchise. It was not so much the acquisition of a product or a technology, but the acquisition of a team. That team filled in the ranks in Netscape with people who were messaging-savvy and knew that market, and we were off to the races.

This team is now an integral part of everything that we do. By early 1996, we were able to have mail, news, and certain Web capabilities out in the marketplace. What's more, we had begun the process of building messaging and collaboration capabilities that have all the features that enterprise customers are used to in standard groupware, on the one hand, and yet have the support for standards and scalability that the Internet requires. It took until the early part of this year for products that had both of those pieces in tandem to be in place and delivered.

What are some other emerging standards you see that will help push this idea of open standards for the customers rather than having them be controlled by a proprietary platform?

Obviously HTTP and HTML are at the front of that list. The Internet InterORB Protocol, IIOP, is another important piece, and right now Netscape, Sun, Oracle, and IBM are working together to support it. Microsoft still has the proprietary DCOM, which is basically what they believe in. Here is an area where there is very broad support for IIOP, but there's not an agreement across the entire industry.

Could you compare those two in terms of just functionality? What does one offer versus the other?

Well, one is mature and tested, with years of corporate experience behind it. And it's cross-platform. The other is immature, untested, and Windows-only.

What you're emphasizing is not so much the specific features, but the ubiquity across platforms?

And the maturity and the robustness, yes. The whole reason for having such an object framework and a protocol for supporting it is to link disparate systems, right? If you can't link disparate systems, there's almost no reason to have an object protocol.

And DCOM cannot link disparate systems?

DCOM can link only the disparate versions of Windows.

Now, there's a feature. So what are some other emerging standards?

Another is ICAP, the Internet Calendar Access Protocol, although we don't have a spec that people feel very comfortable with. Lotus and Microsoft are co-authoring a spec for that in the IETF Calendaring and Scheduling (calsch) working group, and there should be something ready by the end of the year.

This is an area where there is a standard for the document format, for how the information is stored, but not for how you communicate about that information. The standards development process does sometimes take more time than one would like, but I think religious warfare is not in the best interest of the customers. I think in the calendar space we'll be able to avoid it, and there will be a consensus spec.

SuiteSpot is the general enterprise solution. When do you expect to have things like transaction management and consistency management among replicated files?

Well, there are obviously a vast number of ancillary services that will continue to be valuable for customers. There are certain things that need to be a core part of the platform, and there are other things that can be plugged in by the vendors. If you look at things like transaction services and message queuing capabilities, all of those things are provided by third parties today. If you think about the way most enterprise IT is done nowadays, it is in an environment with multiple vendors working together. Fifteen years ago, that wasn't the case: enterprise software was basically all on the AS/400, and you got everything from IBM, or all on the VAX, and you got everything from Digital. But I think this partnership approach works fairly well.

Earlier you asked about replication and file matching, and I think that this is an interesting area. We spend a lot of time talking to customers about what they want here, and basically what people want is to know that there's one master copy of the document that's properly version-controlled, that access control is properly enforced, that you can use the network efficiently and yet still get fast access to a document regardless of where you are and when you want access to it.

Do they want support for mobility too?

Yes, and that's part of providing fast access to it, no matter where you are. So what people have most often looked to, I think somewhat naively, is the Notes replication capability, which is actually a very complicated way of trying to deliver those benefits. It has an awful lot of overhead, and it magnifies the cost of ownership by two or three times to make that environment work.

What we think is a more efficient way is to have a master copy of the document which is version controlled, and where access controls are managed by the end user, not by a system administrator. It is also important that your entire intranet is searchable, rather than an individual Notes server. To search a corporate network based on Notes--let's not call it an intranet--you basically have to replicate everything to one physical machine, have a big enough machine to be able to handle that, have all of the machines talk to one another at least once a day, and you've still got lag time.

You know, this is still the approach used in massive engineering efforts--Boeing's 777 design, for example--to essentially put all the CAD drawings in one central place so everybody can access them. Do you see your approach revolutionizing things like engineering design as well as normal business?

Oh, I think it absolutely does. One key is that although there is a central repository for a set of documents, each project team might have its own repository.

So you are talking about a distributed environment here.

I would think of it as de-concentrated. Everybody can put data nearly wherever they want; the naming scheme and the search capabilities make it possible to still find things and get fast access to them. What we typically find in practice is that one project team will create a place for all of their associated things, but another project team will have its own place for doing this. If I'm a manager in this environment, I can still do a search and query all of those databases and be linked immediately to the right repository where I know the most recent version is kept.

How does that name-serving mechanism work? How is it, for example, that if I'm on this database here and I want to assign a file name, it's guaranteed not to conflict with anything anywhere else?

Fundamentally, what we're really talking about is URLs as the primary naming service, and so you've got the host name.

So you're using the intranet infrastructure as the name server?

Absolutely. When you think about what Notes was trying to do 15 years ago--when there wasn't HTTP, or HTML as a primary document format, when there weren't URLs as a primary naming service or DNS as a naming service--they invented all of those things in their own proprietary way, and they wired them all together. Now suddenly Internet standards dictate the way that everybody wants to do these things across all of their systems. What we're saying is that this infrastructure is mature, it does work, it's been proven to scale, and we've been able to add the features that, as a corporate or enterprise customer, you've always expected from your e-mail system, your publishing system, and your application environment.
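The point above--that URLs plus DNS already form a global naming service--can be seen by decomposing a document URL. A sketch with Python's standard urllib.parse (the host name and path are made-up placeholders):

```python
from urllib.parse import urlsplit

# A document URL on a hypothetical intranet (names are placeholders).
url = "http://docs.eng.example.com/777-wing/spec-v3.html"

parts = urlsplit(url)

# DNS resolves the host; that host's server resolves the path.
# Because DNS guarantees host names are globally unique, two project
# teams can reuse the same file name without conflict -- the full
# URLs still differ, so no separate proprietary naming layer is needed.
print(parts.hostname)  # the DNS half of the name
print(parts.path)      # the server-local half of the name
```

This is why a project team can keep its repository on its own server and a manager can still link straight to the authoritative copy: the name itself says which machine holds it.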

Do you think that Domino solves any of these problems that Notes has?

From what I can see, Domino was intended to do a specific set of things. Domino was intended to take people who've invested years and a lot of money in Notes and allow them to publish certain parts of their data, with limited functionality, to people outside who aren't part of the Notes environment. I think it has served their customers well in its way. On the other hand, one would still ask: Is it a great Web server? A great messaging server? The right way to design a new intranet? I think the answer to all of those questions is no. Architecturally it's not a good strategy for most customers. We've found a lot of places where people aren't rolling back Notes, but they're stopping and building the rest of the organization around intranets, and Domino is effectively a useful gateway in that infrastructure.

What are you doing about clustering? There's a big fanfare about NT, how they're going to be putting at least two machines at a cluster. What's Netscape's take on that?

Clustering two NT machines together is a nice start, but the point is that for customers who have high scalability requirements, the reality is that UNIX systems deliver those kinds of performance numbers on one-, two-, four-, and eight-way machines today, and many of those UNIX operating systems already have clustering services. So it is obviously a requirement for Microsoft to continue to enhance the scalability they have, but their marketing efforts have clearly demonstrated that there are scalability issues.

We want our customers to be able to scale regardless of what operating system they use, and we want people to be able to mix massively scalable systems in the headquarters office that may be running UNIX with moderately scalable systems in a remote office that may be running NT, and to do all that in a seamless way. That's the thing I think is most important. So I don't think clustering is a watershed event for customers or even for Microsoft for that matter.

You're making the case for a win on the server side because you're providing a sort of virtual network operating system that cuts across platforms, and Windows doesn't offer this. Furthermore, Windows doesn't have a big installed base in network servers, so there is actually a possible win here. But there's the client side where Microsoft does have a monster of an installed base. If Netscape wins in some sense on the server side, together with your partners like Sun and IBM, will the client side still stay Microsoft?

Well, it's a big question with a lot of pieces, but let me take a couple of them. First of all, from a technical perspective, the big thing that standards have brought is that client and server are disaggregated. In a client/server world, the two have always been one and the same business decision. You can think about Notes as the obvious example. Who provided a Notes client other than IBM? Well, the answer is still nobody. So that is a big win technically.

You know, we do test with Internet Explorer, we test on the messaging side with Eudora, we do interoperability testing with Novell. We're building for a heterogeneous world. From a technical perspective it is, I think, possible for us to be very successful in the server business--really the leader of the server business--regardless of what happens on the client side.

That having been said, even if you assume absolute interoperability from the protocol perspective, there are a bunch of places where the client and server can be related in effective ways. For example, we build all these content management features on the server side, which is a much more efficient way of delivering these services than, say, trying to deliver FrontPage to everybody's desk. But you still need to have user interfaces for those on the client so that the end user sees them, knows that they're there, and can learn how to use them. So the two are related. And it is clearly true that Netscape's success in the client business has contributed to our success in the server business. We want those to be mutually reinforcing over time.

The other place where I think it is important to say the two are related is not on the technical side, but on the sales and marketing side. Clearly the perception of company momentum and being invited to what Jim Barksdale calls "every dance" are critical--the company must continue to be a leader technically and to be perceived by customers as a leader. If a Fortune 500 company is making an intranet decision, we want to be sure that Netscape participates in that discussion in every case. Today that is the case, but it's the combination of a strong client business and a strong server business that really ensures that.

But don't the clients have to implement the protocols that you're pushing as standards?

They absolutely do.

So there's a strong connection on the client side with what you're pushing on the server side.

No question.

If most of the clients are Microsoft and they don't implement IIOP, for instance, they implement DCOM, then does it matter what you do on the server side?

Part of the question there is about Microsoft strategy, and I'm not the expert on Microsoft strategy. But I can say there is very strong customer momentum behind these relevant Internet standards. The scenario that you've just described, in which Microsoft tries to shut down open standards and revert back to their proprietary world, is in fact one that is, I think, very real for customers. One could speculate and see a pattern that says Microsoft is willing to do open standards, but grudgingly. There's always this tension between open standards and their proprietary technologies, and what they would really like to do is do enough of the open standards to acquire the customers and then lock the customers back into proprietary systems.

They want to put a seal around this openness and close it back up.

Right. Sort of a containment strategy. If you look out over the years, it's a very real issue. There's no question that it is an important issue for customers, who believe that open standards are architecturally the way they can best manage their intranets and get lower cost of ownership. With Sun and Netscape and Oracle and IBM all pushing that from the perspective of enterprise software, and Microsoft dragging its feet but coming along, the likelihood of the success of standards-based intranets is quite high. If that's the case, then Netscape prospers no matter what.

All of these standards are not created equal, however. Take the momentum behind CORBA: typically, when a standard fails to gain momentum for so many years, it just doesn't make it. Furthermore, it seemed that with Java all of a sudden we had everything that CORBA had been promising for so many years.

IIOP is basically a subset of CORBA, like LDAP is a subset of X.500. I think one could say CORBA and X.500 were similar in that they were reasonably mature, had been around for a while, and didn't have great momentum, partly because of their complexity (the design was to put in every feature or capability that anybody ever wanted). As a result, nobody could do a full implementation of CORBA, and nobody could do a full implementation of X.500. In the context of this new intranet space, the way to propagate them that has been successful is basically to streamline them to do the stuff that's most important.


Exactly. And then what you might call commoditizing them. It's the combination of streamlining them and making them lightweight enough that they can be deployed everywhere. Then you've got CORBA not only in every Enterprise server, but through IIOP in every Communicator browser. So if you say, well, there are going to be effectively 40 or 50 or 100 million CORBA client seats in the universe, and between Netscape and Oracle and Sun and IBM there will be another half a million or a million CORBA server seats, all of a sudden you've got an installed base and a level of momentum that has never existed before. It's never even been close to that--CORBA has been reserved for very high-end development, and only a small number of people could do it.

Do you see CORBA as creating the ability to architect distributed systems? It's a very formal messaging bus, isn't it? Is it going to create the kind of flexible environment that is going to be necessary in this network environment?

This is a really interesting question, because you can go back and look at HTTP and its success as an example. On some level, one of its great advantages was that if a packet doesn't show up, you can just ignore it and keep going. You don't need 100 percent fidelity for the client to accept the message and operate on it. But I think that what people expect of HTTP--and here I mean the kinds of systems for which HTTP is going to be the primary transport--is pretty different from what they expect of IIOP, or the kinds of systems in which IIOP is going to be the primary transport. People love the fact that HTTP is everywhere, but does it have the reliability? IIOP will be used in the context of a transaction that is part of a mission-critical application, where you need reliability. That level of reliability isn't necessary in the context of display or transport between client and server; in many cases HTTP is a better way to do that. Seamless interoperation of the two provides a pretty strong pair.

I think you just answered a question we didn't ask, which is why you've included CORBA in Netscape One. You've got Java, you've got the Java classes, you've got the equivalent of an ORB with RMI. Why CORBA? And what you just said is you like the protocol better than HTTP.

For certain functions and certain services.

And the name-serving function of ORBs--do you like that better than what you have with Java?

I don't think the big item of concern is that there is more than one way to do things.

Okay, fair enough. We wanted to ask you about the plug-ins. Client plug-ins have been proliferating and confusing us all, and now it seems like we're moving to a server plug-in environment, away from the client. Is that true? Microsoft is making a strong move to create componentware on the server side--what's your strategy in this area?

Our application server has something we call the Web Application Interface, which includes both what you can think of as active APIs, for people building applications on top of those functions and accessing them, and what you might think of as extension APIs, for people trying to extend the functions of the server. We've provided those capabilities in every major release of our server family except for the very first release, the 1.0 version. This is the way that people are going to build enterprise applications--not by building a proprietary client plug-in and distributing it to every client. Client plug-ins have most often focused on certain content or media types.

There are people in the enterprise world who do believe that streaming media with RealAudio, for example, or even multimedia with Shockwave, are important distribution vehicles for presentation of things like a CEO's speech. But the fundamental enterprise applications have nothing to do with that. My point here is that I wouldn't think of the server application platform and plug-ins as being related to the client plug-ins in any way.

I want to note two other things. One is that server plug-ins are by definition easier to manage because you're managing them on the server in one place, rather than on every desktop. And two, when we talked about the advanced management services of SuiteSpot professional edition, one of the things you can do is say, "Our standard configuration for Communicator is version 4.02 with Adobe Acrobat and RealAudio installed," and have the plug-ins downloaded and installed from a central place without the end user doing anything other than starting up Communicator.

So what is a servlet?

A servlet is a Java applet executed on the server.

You've got the server plug-in, and you've got the servlet. How do these correspond? Are they just two different ways of doing the same thing, or do they do different things?

Well, as I said earlier, our big concern is not about having too many ways to do things, but about having good ways to do things. You can write plug-ins and applications on the Enterprise server in C, in C++, in Java, in JavaScript. And ideally you'd like uniform services to be available in all of those, and for the programming language not to be a barrier.

The Java servlet API that JavaSoft has proposed provides a way for a Java applet running on the server to access services that are either local or on a remote system--for example, a database running on a separate physical system. You can also use the Java server to access the database. We also provide in Enterprise a JavaScript-based database connectivity that has native drivers for all of the primary databases, like Informix, Oracle, Sybase, and DB2, as well as ODBC-enabled databases, so developers have their choice of what they think is the best way of executing a particular function.

Is it correct to say that Netscape is a big believer in push technology?


Is that also for software delivery?

Yes. You know we have a very close partnership with Marimba.

What do you think of their recent deal with Microsoft?

Marimba is in the difficult position of trying to manage its business in the context of a bipolar universe.

This is reminiscent of the Cold War.

That's right. It is a challenge for many small companies. We have worked really closely with them, so I think there are two things to say. One is that push technology means different things to different people. You have push for delivering content, and then push for software distribution through Mission Control, the component within SuiteSpot Pro that we've talked about, which allows you to distribute software updates and plug-ins down to the client.

The management of updates to Communicator is done using HTTP and HTML and JavaScript today. Now there's another business problem that customers have, which is that if I'm building a downloadable Java applet to provide access, for example, to my finance system--doing budget updates and the like--I want there to be application logic on the client. I want people to be able to manipulate the data on the client; I don't want to do all that on the server. So basically I've got a Java applet that I want to be persistent on the client, and I download it each time. Marimba is a very good solution to that problem. We do ship the Castanet Tuner component in Communicator today, and we resell the Castanet Transmitter separately on the server side--not as a bundled part of SuiteSpot.