IEEE Internet Computing


Tom Malone on the Implications of the Digital Age

"What if you took the Internet, not just as a technical infrastructure for enabling business, which I think it certainly will be, but as an organizational model for how to manage a business?"

Tom Malone

"Coordination is managing dependencies between activities." With these words, Tom Malone, professor of Information Systems at MIT's Sloan School of Management, set the stage for his life's work investigating the nature of coordination in complex systems and the impact of information technology on how people work together. Malone's work draws on a variety of disciplines, including computer science, organization theory, management science, economics, linguistics, and psychology. A pivotal focus is examining coordination structures in very different kinds of systems--distributed computer systems, human organizations, and insect colonies have all served as a basis for Malone's investigations.

Malone has asserted that "relentless improvements in information technology continue to reduce the costs of communication by a factor of 10 every few years." For this reason understanding what new forms of organization will emerge from our electronically connected world is central to his work. Internet Computing's EIC Charles Petrie and Acquisitions Editor Meredith Wiggins met with him at his office at MIT on April 7 to discuss the impact of the Internet on organizations of the 21st century.

We are delighted to present his far-ranging ideas to the readers of Internet Computing.

I'd like to invite you to tell us how you came to form the Center for Coordination Science.

At the time we started the Center in about 1989 I had an increasingly strong intuition that there were a lot of connections between questions discussed mostly independently in a number of different disciplines: computer science, economics, organization theory, management--for that matter, even biology. In all those cases, different disciplines were worrying about problems that were in some ways the same problems, questions like, "How can scarce resources be allocated among multiple competing uses?" "How can tasks be assigned to individual actors?" "How can sequences of actions occur in the right order?" Once you begin to think in this way, it becomes clear that much of computer science is about how to solve these problems, as is much of economics, and almost all of management. We weren't the first people to ever notice possible connections, but it became increasingly clear to me that there were some deep structural similarities among these problems as they occurred in different kinds of systems and that these structural similarities hadn't been systematically attacked. That was part of the motivation for creating the Center.

Another part was that these theoretical problems had practical applications that were becoming more and more important and pervasive in the late 1980s, and that have become, if anything, much more so since then. For instance, at that time I don't believe the term "groupware" was widely known. The term "computer-supported cooperative work" had been used to some degree in a research context. People were just beginning to try to figure out what it would mean to use computer systems not just as heavy-duty, back-office transaction-processing tools or as individual productivity tools, but instead as networks that connected people to each other. The growth of networking was clearly a case where this general phenomenon of collaboration was important, and where understanding it at some deep level had the potential for very important practical payoffs.

At the same time, at a somewhat higher level than groupware, people were just then beginning to sense the possibilities of electronic commerce. In those days it was mostly called EDI. Occasionally people would talk about interorganizational systems. In 1987, we published an article about "electronic markets,"1 and people were just beginning to sense the possibilities of that. But almost no one then talked about electronic commerce being as important as many people are now convinced it will be.

Here again it seemed very clear that understanding the problems of coordination and how different actors could work together on common problems had the potential to have huge payoffs in terms of how we design the technologies to support electronic commerce and other interorganizational relationships.

In fact, it was beginning to become clear to some of us in the 1980s that the capabilities provided by increasingly powerful and increasingly inexpensive information technology were taking us across a kind of frontier. On the other side of this frontier, new ways of organizing that would previously have been unthinkable or completely infeasible were one by one, and in some cases suddenly, becoming dramatically better.

The barriers were falling.

Right. That's perhaps the most important practical implication of the kinds of things we've been trying to understand in coordination science. In some sense, business and, for that matter, other organizations are all about coordinating the work of different people. To understand the fundamental changes in how work will be organized in the 21st century, we need to ask questions like: What are the design principles for organizing the work of multiple actors? What are the new possibilities? What are the constraints?

There's a kind of analogy I like to use for this. When you are designing things like cars or computers, you can think of there being a kind of "design space" for the thing you are designing. With cars, for instance, the design space has dimensions like: What color is the car? What kind of engine does it have (diesel, gasoline, electric)? How many wheels does it have, and so forth. If you think of human organizations as being--in some sense--designed, then you can think of there being a design space for them, too: a space of possible designs for organizations. If you look at things in that way, you can say that at a certain cost for communicating and computing things, some regions in that design space are economically feasible and therefore desirable to explore. But other regions are completely infeasible and usually not even sensible to think about. What the improvements in information technology are doing is greatly decreasing the constraints on what kinds of communicating and coordinating are possible in organizing human activity. These changes, therefore, are greatly expanding the feasible regions of the design space for organizations.

Are you saying that organizations should be modeled on what the new information technologies make possible?

No, I'm not saying that companies are or should always be modeled on the computer. What I would say is that by analyzing and thinking about how computer systems are designed, we can understand things that can also help us design human organizations. The history of human thought, particularly in some of the human sciences, does in fact follow a path that is linked to technological developments. In psychology for instance, when the steam engine was a big, new invention in the technological sphere, Sigmund Freud developed a theory of the human psyche that was basically a hydraulic theory: pressures build up one place and have to be let out some other place. When in a later era the computer became one of the dominant technologies in the world, then suddenly computational and information-processing models became much more prevalent in psychology. And, I think, the same thing is true in organizational theory. The old mechanical models of organizations are increasingly being replaced by information processing models.

Let's examine these ideas in the context of your research, starting with the Enterprise work in 1986-1987.

Enterprise2 was a system I helped develop when I was at Xerox PARC and continued to work on after I came to MIT in 1983. The basic idea of Enterprise was that Xerox PARC had large numbers of powerful personal computers and workstations sitting around idle much of the time. At the same time, there were computer-intensive tasks that people wanted to get done. So how could we take advantage of that idle computational power to do those tasks? The problem was, perhaps, first recognized by John Shoch and Jon Hupp at Xerox PARC in the early 1980s, when they created programs they called "worms" to use other people's computers that were idle during the night.3 What was novel about our approach was that we said this was a coordination problem, and therefore to solve it let's look for ideas in other systems that have solved similar coordination problems. In this case, we chose to look at economic markets. Markets clearly provide a way of allocating scarce resources, which in some sense was the fundamental problem at issue here. So we basically borrowed the idea of competitive bidding as a way of allocating idle computational resources among the tasks to be done.

Which has now become an important paradigm in agent coordination.

Correct. We also borrowed from work done on Contract Nets by Randy Davis and Reid Smith in the late 1970s. In their parallel processing system, problem solving was distributed among decentralized, loosely coupled processors, which communicated about task distribution and answers to subproblems in order to "negotiate" the solution to the problem.4 We took this further in terms of analyzing the different protocols you might use and applying it to the problem of allocating scarce computer resources. After we did the Enterprise system, Bernardo Huberman and some of his colleagues at Xerox PARC developed a system called Spawn that went further still in that direction.5
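The competitive-bidding idea behind Enterprise and the Contract Net work can be sketched in a few lines. The Python fragment below is only an illustration--the names and the single scoring rule are our assumptions, not the actual Enterprise protocol: each idle machine bids its estimated completion time for an announced task, and the task is awarded to the lowest bidder.

```python
# Illustrative sketch of market-like task allocation (not the actual
# Enterprise protocol): each idle machine bids its estimated completion
# time for an announced task, and the lowest bid wins.

def run_auction(task_size, machines):
    """machines: dict of name -> speed (work units per second).
    Each machine bids task_size / speed, its estimated completion
    time; the task is awarded to the lowest bidder."""
    bids = {name: task_size / speed for name, speed in machines.items()}
    winner = min(bids, key=bids.get)
    return winner, bids[winner]

# Three hypothetical idle workstations with different speeds:
idle_machines = {"alto-1": 2.0, "alto-2": 5.0, "dorado-1": 10.0}
winner, eta = run_auction(task_size=100, machines=idle_machines)
```

Real contract-net protocols add task announcements, eligibility checks, and explicit award messages; the single-round, lowest-bid auction above is just the core allocation step.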

So here is an example of borrowing ideas from human organizations to help design new computer systems. In other cases we've borrowed ideas from computer systems to help understand or design human organizations.

Can I make a distinction here? It doesn't sound like you borrowed the idea of the organization--a hierarchical arrangement versus a market, for example--it sounds like you borrowed a coordination strategy that actors would use--each person should behave in this fashion, and you would have coordination as a result.

You're getting at a really important issue here. I think you're saying there is a difference between top-down, hierarchical coordination and coordination that emerges "bottom up" from independent actors following a set of rules. If your goal is to coordinate some set of activities in order to achieve some goal, there are a variety of coordination strategies you can use to do that. One strategy is to set up a hierarchical structure in which some actors tell other actors what to do. Another strategy is to create a set of "rules of engagement" through which independent actors interact with each other in such a way that the same overall goals are achieved.

I think this distinction between hierarchical, top-down coordination processes and nonhierarchical--bottom-up or emergent--coordination processes is an important one, and it's one that our work has touched upon a number of times in different ways. One interesting way of thinking of this comes from Mitch Resnick at the MIT Media Lab.6,7 Most of us today have what Resnick calls the "centralized mindset." For instance, when we see a flock of birds flying in formation, we tend to assume that the bird at the front is the leader, and that the leader is somehow organizing the flock to fly. In fact, as far as we know from biologists, each bird is simply following a set of simple little rules that result in the emergence of this pattern. But our natural inclination when we see a problem, especially an organizational or management problem, is to say, How can we solve that problem by putting somebody in charge of it?
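Resnick's flocking example can be made concrete with a toy simulation. In this sketch--purely illustrative; the rule and the numbers are our own, not Resnick's models--each "bird" follows one local rule (move partway toward the average position of its neighbors), and a tight formation emerges with no leader at all.

```python
# Toy demonstration of emergent coordination: every agent follows the
# same single local rule (move partway toward the mean of the group's
# positions). No agent is in charge, yet the flock converges.

def step(positions, rate=0.5):
    """One round: each agent moves a fraction of the way toward
    the current mean position of the group."""
    mean = sum(positions) / len(positions)
    return [p + rate * (mean - p) for p in positions]

def spread(positions):
    """Distance between the two outermost agents."""
    return max(positions) - min(positions)

flock = [0.0, 3.0, 7.0, 10.0]  # initial scattered positions
for _ in range(20):
    flock = step(flock)

# After repeated purely local updates the flock is tightly clustered,
# even though no agent ever issued a global instruction.
```

The spread halves on every round here, so after 20 rounds the group is essentially a single cluster; the pattern is global, but the rule is entirely local.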

I think in principle in every case, and in practice in more cases than we suspect, anything that can be coordinated in a hierarchical way can also be coordinated in a nonhierarchical or emergent way.

That's an important claim. Have you done any simulations, or is that "Malone's conjecture"?

I've never stated this claim this strongly before. It's my conjecture, and a hypothesis to be tested. In practice, clearly you don't always want a nonhierarchical organization, and an important component of coordination science is to articulate the trade-offs between approaches: when one and when the other would be better, and what the factors are. But I think nonhierarchical, decentralized organizations are much more often possible than we usually assume. In fact, one of the interesting practical questions here is how we can create more decentralized, emergent organizations for human activities.

To use your metaphor of an evolving design space, has the Internet made more of these emergent-type coordination processes possible, as opposed to hierarchical ones?

Yes, exactly. In fact I have two other pieces of work that both touch on this question in different ways. The first was a 1987 article called "Electronic Markets and Electronic Hierarchies."1 In this article, we gave an economic argument for why the decreasing costs of communication and coordination enabled by information technology would increase the number of situations in which market-like--you could say emergent--coordination was more desirable than internal, hierarchical coordination. In other words, we predicted that information technology would make it more desirable to buy things rather than make them internally, to outsource more and more of the activities that might previously have been done inside a large company. This, in turn, would lead to the viability of small companies in more situations.

In 1994, we published an article8 in Management Science in which we analyzed econometric data and found that there appeared to be a correlation between the decreasing size of firms in an industry and the increasing use of information technology in that industry. So, the more IT, the smaller the average firm, with a lag of about two years.

That was consistent with the original argument. Certainly all the things we're seeing today--electronic commerce, networked organizations, outsourcing more and more things that aren't your core competency--all of this is consistent with our hypothesis and can perhaps be explained by the fact that information technology is reducing the cost of coordination.

Do you think it's happening more slowly than it could?

I think the answer must be yes, it's happening more slowly than it could--and also faster than anyone expected. If you say more slowly than I expected, perhaps yes. However, I think the changes in the Internet in the last three years or so have taken many people by surprise in terms of how quickly things are happening. It seems to be a kind of sea change, what an evolutionary theorist might call punctuated equilibrium--when you go for a long time with very little or very slow change, and then all of a sudden everything changes almost at once, and you're in a new world.

Let me ask the same question in another form. If we look at the example of distributed project coordination, people are still using systems that are not distributed and that have very little functionality. In an age when companies are collaborating on huge projects, why isn't information technology being leveraged?

I think there is a systematic explanation for that. If you look at word processing, there is some amount of systemic interdependence in terms of having the software and having the hardware and having somebody to help you fix it, but the spread of word processing is basically a one-person-at-a-time process. This means if you keep convincing individuals one at a time to adopt it, you move down the "S-curve" of adoption more and more rapidly and get more and more people involved. But with something like project coordination, or for that matter e-mail or even telephone systems, the more other people there are, the more value everybody gets.

E-mail was around for a long time before it took off.

Yes, and so were the telephone, and the fax. Because all these technologies require groups of people to adopt before they are very useful to anyone, they all grew fairly gradually at first. But, in each case, the exponential benefits of big communities to communicate with eventually led to a really sharp acceleration in the growth of the technology. In a similar way, with project coordination systems, the unit of adoption is not an individual or even a pair of individuals, it's a whole group.

What is your conclusion from this?

The conclusion I draw is that even if such a technology has been around for a long time and the potential benefits are quite obvious, it is particularly difficult to get these types of innovations started. For anyone to benefit, nearly everyone in the group has to adopt them. That means you can go a long, long time before you achieve very wide penetration. But at some point as the costs get lower and lower and the potential benefits continue to be high, you'll come to a place where all of a sudden a whole bunch of people will change almost all at once.

An analogy for this is the work Jonathan Grudin has done on group scheduling systems. He did a classic paper9 almost 10 years ago on why group scheduling systems aren't used, and another paper fairly recently10 on why some are finally being used. My interpretation of his work is that the benefits have always been there, but the costs have gotten lower, the interfaces have gotten better, the necessary hardware and software have gotten more and more prevalent. So at some point you get to a place where, with benefits constant and costs approaching zero, people start using it.
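The tipping dynamic described here--benefits that grow with the number of adopters while costs fall over time--can be captured in a toy threshold model. All the numbers below are made up; the point is only the shape of the curve: a long flat stretch, then a jump to near-universal adoption almost all at once.

```python
# Hypothetical threshold model of group-technology adoption.
# An individual adopts when the benefit (proportional to the number of
# current adopters) exceeds the cost (which falls each period).

def simulate(population=100, seed_adopters=2, benefit_per_adopter=1.0,
             initial_cost=500.0, cost_decay=0.9, periods=60):
    adopters = seed_adopters
    cost = initial_cost
    history = []
    for _ in range(periods):
        benefit = benefit_per_adopter * adopters
        if benefit > cost:
            # Once the threshold is crossed, every holdout's benefit
            # also exceeds the cost, so the cascade completes at once
            # (a deliberate simplification).
            adopters = population
        cost *= cost_decay
        history.append(adopters)
    return history

history = simulate()
# history stays at 2 adopters for dozens of periods, then jumps to 100.
```

Nothing about the benefit changed at the tipping point--only the slowly falling cost finally crossed it, which is one way to read why e-mail, fax, and group scheduling all idled for years before accelerating sharply.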

So here's a hypothesis for you. The hypothesis is that we're on the verge of seeing project collaboration tools being widely adopted. The Web, the readable Web, is step one. That now eliminates most of the infrastructure problems for access, for letting you see it. What we don't yet have on the Web is easy capabilities for writing.

Then the next step would be easily modifiable Web pages, so that you could continue a message or put up a new page saying you have a task and give the description.

Right. And at some point I think there will be enough functionality in a very widely deployed infrastructure like the Web that this stuff will take off. Will that be next year? Three years? Five years? It's hard to predict exactly, but I think it will take off at some point. Because then it will no longer be a matter of having to go through a huge installation process to get all the software on all the desks of all the people in an entire project. It will be just a matter of saying, "Go to this Web site."

Here's a really radical hypothesis for you. As communication and coordination technologies become better and better, it becomes possible to coordinate the work of more and more people on larger and larger projects more and more effectively. If you take that to its extreme, you have everybody in the world working together!

Now we're talking about a truly grand vision, one in which we could make massive global changes. Ultimately a Dyson Sphere! Way in the future of course.

Yes, what Teilhard de Chardin called the "Omega Point," the culmination of the process of evolution.

Let's turn to another research project, perhaps the one for which you are most famous and one that certainly relates to project management, Information Lens11,12 and Oval.13 Your very fine work in Oval was never sufficiently recognized.

That's nice to hear. In Information Lens we used intelligent agents to filter e-mail messages, and this did become well known in the middle and late 1980s. Oval was a generalization and a follow-on to the Information Lens work, but as you point out it wasn't as well known even though I thought it was a lot better.

The name "Oval" is an acronym for the four key elements of the system: Objects, Views, Agents, and Links. One key goal of the system was the integration of all four of these kinds of capability into a simple and easy-to-use system. An even more important goal for the system was for it to be "radically tailorable." By "tailorable" we meant that end users should be able to easily tailor the system to do what they wanted it to do without ever having to do "real" programming. By "radically" tailorable, we meant that users should be able to make very significant changes, not just cosmetic ones. Perhaps the best-known previous example of a radically tailorable system is a spreadsheet. By changing formulas behind the cells, end users can create very different kinds of applications: from personal budgets to sales projections to investment analyses.

Our goal was to make it as easy for end users to organize and share different kinds of information on networks as it is for them to create new spreadsheets. In one test of this, we were able to create a wide variety of applications--from project management tools to online calendars to intelligent e-mail filtering--using only the primitive capabilities that were available to end users.

We see the technologies of Information Lens in current products like Eudora. What has happened to the technologies of Oval--can they be found in Lotus Notes?

Oval and Lotus Notes were a case of independent invention. The developers of Notes focused on different parts of the problem than we did; the underlying plumbing for replication, for example, which we spent almost no time on. We spent more effort on the upper end of the problem, that is, the user-interface capabilities. I do think that some of the ideas in Oval (like user-definable object types, user-programmable agents, and easily tailorable views) could still be usefully commercialized, either in an extended Web browser or in some future generation of a tool like Notes or Domino.

Originally you defined coordination science as managing dependencies among activities. Now I believe you're concentrating on the Process Handbook, matching processes to coordination models. Can you explain more about this, and how it relates to coordination science?

In this project14 we are looking at how in any business process (in any process, actually) you can analyze what you might call the "deep structure" of the process. We use that term as an analogy to how linguists use the term. When linguists talk about the deep structure and the surface structure of a sentence, they say that the surface structure is the particular sequence of words and the deep structure is the underlying meaning. In general, you can have multiple surface structures for the same deep structure. For instance, "John hit the ball," and "The ball was hit by John." Two surface structures, same deep structure.

By analogy, we say that a business process can also be thought of as having both a surface structure and a deep structure. The surface structure is the particular sequence of actions, and the deep structure is the underlying meaning or the goals and constraints. In general, again, you can have multiple surface structures for the same deep structure. By the way, this applies not just to business processes, but to other processes like software processes or programs.

The basic idea is that you can start with some particular sequence of actions, and then analyze that down to some deeper structure being embodied in that particular surface structure. Once you get down to that deeper structure, you can then generate many other possible surface structures, many other sequences of actions, that would fulfill the same goals or embody the same deep structure.

The language we use for describing these processes is basically activities and dependencies. We say that both the surface structure and the deep structure can be represented as a set of activities and the dependencies or relationships, or interdependencies, among them. At the surface structure those dependencies are very elementary, like this happens before that, before that, before that. Things like simple precedence. At the deep structure those relationships may be more complex. But even there, our current hypothesis is that there are three elementary kinds of dependencies out of which all other important dependencies can be expressed, either by specialization or combination of these three elementary types. The three types of dependencies are flow, sharing, and fit.14

Flow means when one activity produces something that's used by another one, for example, when you write a report that someone else reads. Sharing occurs when multiple activities all need to use the same (limited) resource, like a machine on the factory floor or a fixed amount of money. Fit occurs when multiple activities produce things that have to fit together. For example, when multiple engineers are designing a car, there is a dependency between the engineer designing the engine and the engineer designing the body because the results of their work have to fit together in the same car.

We define coordination as the "management of dependencies among activities," and a key element of our approach is that, for each different kind of dependency, we identify a family of alternative coordination processes that can be used to manage dependencies of that type. For example, whenever there is a sharing dependency, you can, in principle, manage it with any of a variety of coordination mechanisms: first come/first serve, priority order, managerial decision, market-like bidding, and others. And each of these coordination mechanisms can be specialized in many different ways for different kinds of situations.
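One hypothetical way to render this in code is to treat each coordination mechanism for a sharing dependency as an interchangeable policy. The function and field names below are our own, not Process Handbook terminology; the sketch only shows that first come/first serve, priority order, and market-like bidding are alternative managers of the same dependency.

```python
# Sketch: alternative coordination mechanisms for one sharing
# dependency (several activities competing for a single resource).
# Each policy takes the same list of requests and returns the order
# in which the shared resource is granted.

def first_come_first_served(requests):
    return sorted(requests, key=lambda r: r["arrival"])

def priority_order(requests):
    return sorted(requests, key=lambda r: r["priority"], reverse=True)

def market_bidding(requests):
    return sorted(requests, key=lambda r: r["bid"], reverse=True)

requests = [
    {"activity": "job-A", "arrival": 2, "priority": 1, "bid": 40},
    {"activity": "job-B", "arrival": 1, "priority": 3, "bid": 10},
    {"activity": "job-C", "arrival": 3, "priority": 2, "bid": 75},
]

# The same sharing dependency, managed three different ways:
fcfs = [r["activity"] for r in first_come_first_served(requests)]
prio = [r["activity"] for r in priority_order(requests)]
bids = [r["activity"] for r in market_bidding(requests)]
```

Each policy produces a different grant order from identical requests, which is the point of the taxonomy: the dependency is fixed, but the coordination mechanism that manages it is a design choice.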

So you have coordination mechanisms that were originally developed for computer systems that are included in your process handbook. How have you applied this work so far?

One of the things we've done recently to explore how these notions can be applied is a special project with one of our research sponsors, A.T. Kearney, the management consulting subsidiary of EDS. The situation we analyzed was one where their client wanted to improve the hiring process in their organization. We asked, what's the deep structure of the hiring process?

One of the things we found was a case of shared resources: the recruiters who searched for different positions to be filled. That was a sharing dependency, which led us to consider all the different possible coordination mechanisms for managing a sharing dependency, one of which was market-like bidding. That led us to start developing a scenario for how you might use a kind of market-like system to allocate the scarce resource of recruiter time across the multiple positions to be filled.

For instance, you could let the hiring managers post descriptions of the positions to be filled, and then you could let the recruiters bid on which positions they wanted to work on filling. You might even say they could bid in terms of how long it would take them to fill the position, with some penalty for taking more time than they said and maybe a bonus for taking less time. One of the advantages of this approach is that it allows the system to take into account useful information that would otherwise not be considered. For instance, let's say a manager posts a new position for a C++ programmer, and that unknown to the manager, a recruiter has just filled a request for a C++ programmer in another division. Let's say the recruiter also has three other very good candidates who weren't selected for the first position. Now, the recruiter might say "I can make a low bid since I'm likely to be able to fill this job in a very short amount of time." That kind of information, which would otherwise almost certainly never be surfaced, can be taken into account.
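The recruiter-bidding scenario reduces to a small scoring rule. The settlement formula below is hypothetical--the interview mentions only "some penalty" and "maybe a bonus"--but it shows how time bids, penalties, and private information (the recruiter who already has strong candidates can credibly bid low) fit together.

```python
# Hypothetical settlement rule for the recruiter-bidding scenario:
# a recruiter bids a completion time in days, and the final fee is
# adjusted by a per-day bonus for early delivery or penalty for late
# delivery. (The exact formula and numbers are our invention.)

def settle(fee, bid_days, actual_days, rate_per_day=100.0):
    """Adjusted fee: bonus if early, penalty if late."""
    return fee + rate_per_day * (bid_days - actual_days)

def pick_winner(bids):
    """bids: dict of recruiter -> promised days. Shortest promise wins."""
    return min(bids, key=bids.get)

# recruiter-2 already has three strong C++ candidates in hand, so it
# can safely promise a much shorter fill time:
bids = {"recruiter-1": 30, "recruiter-2": 12}
winner = pick_winner(bids)
payout_early = settle(fee=5000, bid_days=12, actual_days=10)
payout_late = settle(fee=5000, bid_days=12, actual_days=15)
```

The private information never has to be reported to anyone; it simply shows up as a lower bid, which is exactly the sense in which the market mechanism "surfaces" information a hierarchical assignment would miss.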

So this takes us back to where we started, to emergent coordination processes. Can you talk more about how this will change management structures?

Well, I have an article in Sloan Management Review this year called, "Is Empowerment Just a Fad?"15 that gives an argument for why, as information technology reduces the costs of coordination, it should in many cases lead us to go through three stages of decision-making structures. The first, which we call "cowboys," are independent decentralized decision makers, like cowboys or cowgirls alone on a horse. As the cost of communication falls, it becomes economically desirable to bring information together into a central place where you can take advantage of essentially global information to make those decisions. These centralized decision makers we called "commanders." You could say that this transition from decentralized to centralized decision-making is basically the history of business in the industrialized economies of the world in the last century. This centralizing of decision making was made possible by communication technologies like telegraph, telephone, and so forth.

As communication costs fall even further, there should in many cases be a third stage where, rather than having all the information brought to one point for centralized decision making, it's now cheap enough so that essentially everybody can have access to all the information.

What this enables is "decentralized connected" decision makers, or what we called the "cyber cowboys." It's now economically feasible to have large numbers of people well enough informed that they can make good decisions. You have the informational advantages previously available only in a centralized system, with the motivational and other advantages previously available only in a decentralized system, and so you get the best of both worlds. That's an argument for why we think empowerment is not just a fad. Instead, we think that advances in information technology fundamentally change the economics of decision making in ways that make the kind of emergent decentralized decision-making systems we're talking about today much more desirable.

Another point we make in the paper is that the term empowerment only goes halfway. Empowerment implies that somebody on top gives the power to somebody lower down. Actually, what we call radically decentralized systems are those where the power originates everywhere. Everybody has the power to begin with, and they may delegate some of it to some centralized place for certain purposes.

And do you see the Internet as maybe the epitome of this kind of system?

Exactly. One of the best examples of that kind of system now is the Internet, where no one is in control and no one can shut it down. In fact, you could say that all the Internet is, in a certain sense, is a set of protocols. Anyone who follows those protocols can play any of a number of roles in the system: service provider, service user, network provider, and so on.

The Internet as a process is definitely driven from the ground up. Of course that may not remain the case.

It is a big open question, but I think even if the Internet collapsed tomorrow the point wouldn't go away that we now have an existence proof that a very large system can be very successful with very decentralized organizational principles.

Right, and it is exciting to many people.

Yes. In fact, one question is whether the success of the Internet is in no small part due to its decentralized nature. For instance, a question I like to ask audiences is, if AT&T had been running the Internet, could the Internet possibly have grown as fast as it has, doubling every year since 1988? Would AT&T have been capable of keeping up with doubling demand every year?

Could any centralized authority have supported such a complex, large process?

Right. And I often talk to management audiences about the Internet as a model for how decentralized organizations can be designed in the future. Many, many managers--like the rest of us--have this centralized mindset. And I find the Internet is a really useful example for beating people over the head with the possibility that, in fact, it isn't always necessary to have centralized control to do good things. The Internet demonstrates that you can have very large, successful systems without it.

In fact, you can even talk about what this might mean in organizations. What if you took the Internet, not just as a technical infrastructure for enabling business, which I think it certainly will be, but as an organizational model for how to manage a business? What if you took the technical architecture and the governance architecture of the Internet as a model for what other organizations could be like?

Don't most business people throw up their hands in horror?

Many of them do. But in fact, I'm trying to take this thinking a little further. One of the examples I've used is a consulting company like, say, McKinsey. McKinsey is one of the largest and most successful management consulting firms, and it already embodies some of these Internet-like principles. (In fact, many consulting firms do.) In McKinsey, for the most part, no one at the top of the organization tells the partners what to do. No one tells partners which clients to go after or what kind of work to do. The partners are essentially independent, autonomous decision makers, like the people operating on the Internet. What the McKinsey organization does is establish the interaction protocols among these more or less autonomous entities or agents. For example, it establishes a lot of cultural expectations about what you do when another partner calls you on the phone: you answer, you return the call, you try to be helpful. The organization also establishes a set of protocols for the selection process at many stages in the organization. Then, because you know that anyone at a certain level in the organization has gone through a certain rigorous selection process, you can make a whole lot of assumptions about the kind of person you're talking to, even if you've never met them before.

The analogy I'm making is that, just as the Internet defines a set of interaction protocols, one of the most important things a consulting organization like McKinsey provides to its highly autonomous partners is a set of agreements about how to interact with each other. And within that communication framework, within those interaction protocols, there's very little centralized direction about what you do. The establishment of that framework for communicating can be a very powerful enabler for lots of very good emergent work.
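For readers who think in code, Malone's picture of coordination through shared protocols rather than central direction can be sketched as a toy simulation. The `Partner` class, the skill sets, and the `ask_around` function below are illustrative inventions for this sidebar, not anything Malone describes:

```python
# Hypothetical sketch: autonomous "partners" coordinate only through a
# shared protocol -- here, the rule that every request gets an answer
# and you volunteer if you can help -- with no central dispatcher
# assigning work to anyone.
class Partner:
    def __init__(self, name, skills):
        self.name = name
        self.skills = set(skills)

    def handle(self, request):
        # The protocol: always respond; be helpful when you can.
        return (self.name, request in self.skills)

def ask_around(partners, request):
    """Any partner may query all the others; who ends up helping
    emerges from the replies, not from a manager's assignment."""
    return [p for p in partners if p.handle(request)[1]]

partners = [Partner("A", {"strategy"}), Partner("B", {"tax", "strategy"})]
helpers = ask_around(partners, "tax")  # only B volunteers
```

The point of the sketch is that the only "organization" present is the protocol itself; which partner does the work is an emergent outcome of local decisions.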

How do you react to people like Bob Metcalfe, who talk about the need to find effective ways of building in consistent pricing models and management structures for the Internet?

Well, I have not talked to Bob about this at any length. But let me say this much: in some sense, the very definition of coordination has the word "management" in it. Whenever there are interdependencies, as there certainly are in the Internet, they need to be managed in some way. What I would not agree with, and I don't think Bob Metcalfe would say this either, is that management has to be centralized in every case.

I think there are often decentralized ways of managing many of these dependencies. In the case of the Internet, I think there may very well be a place for pricing mechanisms as a way of managing some of the dependencies that are currently either unmanaged or under-managed. That's not to say that it has to be centrally managed.

I believe Metcalfe would say that the free market should determine pricing.

And I think one of the most promising places to look for ideas about how to create decentralized systems with desirable emergent properties is markets. That's not the only place such things exist; they also exist in organic, biological systems like ecologies and populations.
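To make the market idea concrete, here is a minimal sketch of market-style allocation in the spirit of managing dependencies through prices rather than a central schedule. The task names, agents, and cost table are hypothetical, chosen only for illustration:

```python
# Hypothetical sketch: each task is "auctioned" and goes to whichever
# agent bids the lowest cost, so the overall allocation emerges from
# local bids rather than from a central planner's schedule.
def market_allocate(tasks, agents, cost):
    assignments = {}
    for task in tasks:
        winner = min(agents, key=lambda a: cost(a, task))
        assignments[task] = winner
    return assignments

# Illustrative cost table: agent a1 bids lower for t1, a2 for t2.
costs = {("a1", "t1"): 3, ("a1", "t2"): 5,
         ("a2", "t1"): 4, ("a2", "t2"): 2}
allocation = market_allocate(["t1", "t2"], ["a1", "a2"],
                             lambda a, t: costs[(a, t)])
# allocation == {"t1": "a1", "t2": "a2"}
```

No agent sees the whole problem; each only quotes its own cost, yet a sensible global assignment emerges, which is the essence of the market mechanisms Malone alludes to.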

Any combination of cells that work together to create a functioning body.


You've talked about how the Internet, groupware, and agents--these key technologies--are going to change our economic models. Do you have anything to say to software engineers about the kinds of systems they should be developing for these emergent systems?

Well, let's see. First of all, systems that are useful in organizations have to be easy to use at a simple user-interface level. That's a lesson that was definitely not well understood 10 years ago; perhaps it's becoming well understood now. Systems like those we're talking about here today--not just individual productivity tools, but group, organizational, and societal-level coordination and collaboration tools--require a different level of interface: not just a user interface, but what I called in a paper many years ago an "organizational interface."16,17 In other words, the user-interface questions don't go away, but there is another whole level of considerations that you have to take into account to successfully design systems like this. Among the important considerations are the incentive questions that Jonathan Grudin has talked about. I'd say these are coordination questions. That is, you have to worry not just about how individuals interact with their computers, but about how a computer system can provide (or at least be consistent with) both the informational and motivational elements of a coordination process involving multiple people. I guess it's kind of obvious from what we've said so far, but I think one of the places to look for inspiration in doing such things is a body of theory and practice about coordination, drawing on examples from other kinds of systems.

In your article in Scientific American18 you point out that information sharing is starting to eliminate some middle management. Do you think the explosion of the Internet is going to take that further?

Yes, but I think the changes will be more complex than a simple statement like "middle management will be eliminated." Maybe the middle managers of the future will be internal entrepreneurs running internal cottage industries with their own bottom lines. Or, perhaps, nearly everybody will be in business for themselves, contracting to provide services to each other and to other organizations.

This has huge social ramifications. Aren't you talking about a world where there may not be a place economically for many people?

We could do another whole interview on this topic. Now you're getting into some of the things we talk about in our new Initiative on "Inventing the Organizations of the 21st Century." One of the scenarios we did was "small companies, large networks."19 We asked: what if work that used to be done inside large companies were organized in lots of potentially large but very temporary networks composed of very small companies or even individual contractors? That might well have some real advantages for economic efficiency, for flexibility, and so on. But it could potentially have some real disadvantages from the point of view of individuals.

For example, at the minimum, it could be a pretty lonely place. If all your working interactions are with your suppliers or your contractors, and never with your fellow employees, it might be an unpleasant kind of social environment. Another obvious potential problem is financial security. Many of us get a sense of financial security from being employed by a stable large organization, even though that security is much less than it used to be. Another potential problem is where do you go for learning? For sharing war stories? For having your own reputation established and your credentials verified? Where do you go for a sense of community, a sense of identity that many of us get from the organizations we work for?

Recognizing these problems led to what was, in some ways, an obvious next step: Perhaps there should be a place in the world for a different kind of organization. Not a task-oriented, task-managing organization, but instead a community-oriented, people-managing organization. In other words, an organization that provides a home for individuals of the sort that our large companies do today, but in a way that is orthogonal to the temporary task networks that actually do the work. One of the words we've ended up using for these organizations is "guild," by analogy to the guilds of the Middle Ages. These guilds could arise from a variety of sources today: from professional societies, from college alumni associations, from unions, from families, from neighborhoods, from churches.

Or even a new kind of temporary employment agency that took more of an interest in its people.

Yes, that's another good analogy. Manpower, Inc. is the largest private employer in the US today. I don't think this is the end of the answer, but it's at least a direction toward something that I think helps solve some of the problem.

Remember that people who aren't entrepreneurs might well want to associate themselves with people who are entrepreneurial, who will look for entrepreneurial opportunities for the people they're working with.

Managers and entrepreneurs do that today. I think that entrepreneurial activity will become more widely spread through the population, because the technology makes it more economically effective and possible. You could argue that we've gone way too far in the other direction in the large hierarchical organizations of today, that there are a lot of people who are capable of being more entrepreneurial than such organizations give them a chance to be. *



1. T.W. Malone, J. Yates, and R.I. Benjamin, "Electronic Markets and Electronic Hierarchies," Comm. ACM, Vol. 30, 1987, pp. 484-497.

2. T.W. Malone et al., "Enterprise: A Market-Like Task Scheduler for Distributed Computing Environments," The Ecology of Computation, B.A. Huberman, ed., North Holland, Amsterdam, 1988.

3. J.F. Shoch and J.A. Hupp, "The WORM Programs--Early Experience with a Distributed Computation," Comm. ACM, Vol. 25, No. 3, Mar. 1982.

4. R.G. Smith and R. Davis, "Frameworks for Cooperation in Distributed Problem Solving," IEEE Trans. Systems, Man, and Cybernetics, Vol. 11, No. 1, Jan. 1981.

5. C.A. Waldspurger et al., "Spawn: A Distributed Computational Economy," IEEE Trans. Software Eng., Vol. 18, No. 2, Feb. 1992, pp. 103-117.

6. M. Resnick, Turtles, Termites, and Traffic Jams: Explorations in Massively Parallel Microworlds, Complex Adaptive Systems series, MIT Press, Cambridge, Mass., 1997.

7. M. Resnick, "Beyond the Centralized Mindset," J. Learning Sciences, Vol. 5, No. 1, 1996, pp. 1-22.

8. E. Brynjolfsson et al., "Does Information Technology Lead to Smaller Firms?" Management Science, Vol. 40, No. 12, 1994, pp. 1,628-1,644.

9. J. Grudin, "Why CSCW Applications Fail: Problems in the Design and Evaluation of Organizational Interfaces," Proc. Second Conf. Computer-Supported Cooperative Work, D. Tatar, ed., ACM Press, New York, 1988, pp. 85-93.

10. J. Grudin and L. Palen, "Why Groupware Succeeds: Discretion or Mandate?" Proc. ECSCW '95, Kluwer, Dordrecht, The Netherlands, pp. 263-278.

11. T.W. Malone et al., "Intelligent Information Sharing Systems," Comm. ACM, Vol. 30, 1987, pp. 390-402.

12. T.W. Malone et al., "Semi-Structured Messages Are Surprisingly Useful for Computer-Supported Coordination," ACM Trans. Office Information Systems, Vol. 5, 1987, pp. 115-131.

13. T.W. Malone, K.Y. Lai, and C. Fry, "Experiments with Oval: A Radically Tailorable Tool for Cooperative Work," ACM Trans. Information Systems, Vol. 13, No. 2, Apr. 1995, pp. 177-205.

14. T.W. Malone et al., "Tools for Inventing Organizations: Toward a Handbook of Organizational Processes," Tech. Report CCS WP No. 198, Feb. 1997.

15. T.W. Malone, "Is 'Empowerment' Just a Fad? Control, Decision-Making, and Information Technology," Sloan Management Rev., Vol. 38, No. 2, 1997, pp. 23-35.

16. T.W. Malone, "Computer Support for Organizations: Toward an Organizational Science," Interfacing Thought: Cognitive Aspects of Human-Computer Interaction, J.M. Carroll, ed., MIT Press, Cambridge, Mass., 1987.

17. T.W. Malone, "Designing Organizational Interfaces," Proc. CHI '85 Conf. Human Factors in Computing Systems, ACM/SIGCHI, San Francisco, Calif., Apr. 14-18, 1985.

18. T.W. Malone and J.F. Rockart, "Computers, Networks, and the Corporation," Scientific American, Vol. 265, No. 3, Sept. 1991, pp. 128-136.

19. R.J. Laubacher, T.W. Malone, and the MIT Scenario Working Group, "Two Scenarios for 21st Century Organizations: Shifting Networks of Small Firms or All-Encompassing 'Virtual Countries'?" MIT Initiative on Inventing the Organizations of the 21st Century, Working Paper 21C WP #001, Jan. 1997.


Thomas W. Malone is the Patrick J. McGovern Professor of Information Systems at the MIT Sloan School of Management. He is also the founder and director of the MIT Center for Coordination Science and one of two founding codirectors of a new MIT research initiative on "Inventing the Organizations of the 21st Century." Malone's research focuses on how computer and communications technology can help people work together in groups and organizations, and how organizations can be designed to take advantage of the new capabilities provided by information technology. Malone is well known for having led the team at MIT that developed the Information Lens system, a pioneering groupware tool in which intelligent agents help users find, filter, and sort large volumes of electronic information. Professor Malone has been a cofounder of three software companies. Before joining the MIT faculty, Malone was a research scientist at the Xerox Palo Alto Research Center (PARC) where his research involved designing educational software and office information systems. His background includes a PhD from Stanford University and degrees in applied mathematics, engineering, and psychology.

Readers may contact Malone at E53-333, Massachusetts Institute of Technology, Cambridge, MA 02139,
