On Distributed Systems and Social Engineering
With the advent of Web Services, peer-to-peer networking, grid computing, and even the Semantic Web, you might be forgiven for thinking that we had reached the holy grails of distributed computing. Or, as some commentators would have it, perhaps we now have the tools to create the next global computing catastrophe (think multimillion-dollar business processes frozen by unforeseen deadlocks, virus attacks on nuclear power plants,1 or unexpected emergent buying behavior from automated stock market traders2).
Previous contributors to this department have highlighted the importance of such elements as semantics and trust for building these new generations of distributed systems safely. Another important element has been less emphasized, however—our overall approach toward modeling distributed applications.
New tools let us build applications that are distributed not only across many machines but also across organizational and social boundaries. Furthermore, these applications allow lightning-fast deployment worldwide (think of your favorite peer-to-peer consumer service or your least favorite virus) or even allow different systems to interact (such as the recently entertaining MyDoom.O virus variant, which used Web search engines such as Google to discover the email addresses of potential new victims3). The challenges of designing such applications go beyond those of standard distributed systems design because they involve not only process control and protocol issues but also
• System elements that are necessarily black boxes belonging to different organizations or individuals
• Actors with competitive rather than cooperative goals
• Legal issues of responsibility
• Exposure to arbitrary open environments
The consequences could be significant. If the incentives are there, parts of an application that belong to a third party might not just fail—they might try to actively subvert the process.
When distributed processes take on responsibilities
There is no silver bullet for these increasing risks. What appears certain, however, is that we will increasingly need to account for the responsibilities of the individual processes in an application—particularly when they belong to distinct organizations. Two paths seem likely:
• For highly mission-critical applications with known players, the explicit assignments of roles, obligations, quality of service, rights, and so forth will be designed in from the beginning.
• For more open environments, the application environment will increasingly need to be treated as an artificial social system. Social laws, system policies, sanctions, and other social machinery will be put in place to try to ensure that all participants not only can behave according to plan but also have the correct incentives to do so.
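To give the second path a concrete flavor, here is a minimal Python sketch of such social machinery: a system policy carrying an agreed quality-of-service bound, and a sanction applied to participants who violate it. All the names here (Policy, Participant, enforce) are hypothetical illustrations of the idea, not part of any existing standard.

    from dataclasses import dataclass

    @dataclass
    class Policy:
        name: str
        max_response_ms: int   # an agreed quality-of-service bound
        penalty_points: int    # sanction applied per violation

    @dataclass
    class Participant:
        org: str
        points: int = 0        # accumulated sanctions

    def enforce(policy: Policy, participant: Participant, observed_ms: int) -> bool:
        """Return True if the participant honored the policy; sanction it otherwise."""
        if observed_ms <= policy.max_response_ms:
            return True
        participant.points += policy.penalty_points
        return False

    # Example: a third-party service misses the agreed response bound.
    sla = Policy(name="respond-quickly", max_response_ms=200, penalty_points=10)
    vendor = Participant(org="third-party.example")
    enforce(sla, vendor, observed_ms=350)   # vendor.points is now 10

The interesting design question is less the bookkeeping itself than who runs it and whether the sanctions are large enough to make cooperation the rational choice.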
Either way, the humble distributed process will have to grow up—from a cog in a well-oiled machine to a standard-bearer for its organization, with obligations to deliver and potential consequences if it doesn't.
Relevant research areas include not only work on Byzantine failures from the distributed systems community but also norms, social laws, and other social structures;4 agent-oriented engineering methodologies such as Gaia;5 and economic-mechanism design.6
Each of these areas is concerned with designing systems in which carefully managed incentives motivate all participants to cooperate toward common goals.
Although this research is bearing fruit, it is often disconnected from the large body of existing distributed systems research. Furthermore, it remains largely abstract. In the meantime, we might be racing against time to understand the kinds of systems that people are actually building.
The most obvious way to capture responsibilities in human societies is through the concept of a contract—binding two or more parties to courses of action to achieve a purpose.
While human contracts are often highly complex, languages such as Eiffel (www.eiffel.com) and the general concept of design by contract have already shown that simplified notions of contractual relations can be powerful constructs in system building. Although Eiffel's contracts generally operate at a lower level (specifying assertions on the correct transformation of parameter inputs to outputs by individual functions, for example), you can use similar structures to codify relationships between Web Services, grid components, or other distributed components; a minimal sketch following the list below gives a flavor of the idea. Work in this direction is already underway in many quarters; examples include
• IBM's Web Service Level Agreement (WSLA) language7
• Policy languages such as Ponder (www-dse.doc.ic.ac.uk/Research/policies) and KAoS (Knowledgeable Agent-oriented System, www.ihmc.us/research/projects/KAoS)
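For a sense of how Eiffel-style contracts translate into the kind of checks a service wrapper might perform, here is a minimal Python sketch of require/ensure clauses enforced around a single function. The contract decorator is an illustrative sketch only, not any particular library's API.

    # Design by contract in miniature: preconditions and postconditions checked
    # around a call, in the spirit of Eiffel's require/ensure clauses.
    import functools

    def contract(require=None, ensure=None):
        def wrap(fn):
            @functools.wraps(fn)
            def checked(*args, **kwargs):
                if require is not None:
                    assert require(*args, **kwargs), f"precondition violated in {fn.__name__}"
                result = fn(*args, **kwargs)
                if ensure is not None:
                    assert ensure(result), f"postcondition violated in {fn.__name__}"
                return result
            return checked
        return wrap

    @contract(require=lambda balance, amount: 0 < amount <= balance,
              ensure=lambda new_balance: new_balance >= 0)
    def withdraw(balance: float, amount: float) -> float:
        """Transform inputs to outputs; the contract guards both sides."""
        return balance - amount

    withdraw(100.0, 30.0)    # fine: returns 70.0
    # withdraw(100.0, 500.0) # would raise AssertionError: precondition violated

The same shape scales up: replace the arithmetic with a service invocation and the assertions with service-level obligations, and you have the skeleton of a machine-checkable agreement.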
From a system point of view, this approach implies some interesting architectural constructs—contracts are useless unless underpinned by an agreed legal framework, mechanisms for checking identity, notaries, witnesses, and a range of other services. It will be interesting to see how the Internet, grids, corporate intranets, and prevailing standards will incorporate such mechanisms.
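As a rough sketch of why such underpinnings matter, consider the bare minimum for even a toy cross-organizational contract: each party's commitment to the agreed terms must be verifiable by someone else. The example below stands in shared-secret signatures for a real identity infrastructure; every name and secret in it is hypothetical.

    # A toy contract: both parties sign the agreed terms, and a third party
    # (a "notary" holding the keys) can check that neither signature is forged.
    import hashlib, hmac, json

    def sign(secret: bytes, terms: dict) -> str:
        payload = json.dumps(terms, sort_keys=True).encode()
        return hmac.new(secret, payload, hashlib.sha256).hexdigest()

    terms = {
        "provider": "gridco.example",
        "consumer": "webshop.example",
        "obligation": "process orders within 200 ms",
        "penalty": "credit 1% of monthly fee per violation",
    }

    provider_sig = sign(b"gridco-secret", terms)
    consumer_sig = sign(b"webshop-secret", terms)

    def notary_verify(secret: bytes, terms: dict, signature: str) -> bool:
        """Recompute the signature over the terms and compare safely."""
        return hmac.compare_digest(sign(secret, terms), signature)

    assert notary_verify(b"gridco-secret", terms, provider_sig)
    assert notary_verify(b"webshop-secret", terms, consumer_sig)

Everything hard about the real problem (key distribution, dispute resolution, a legal framework to make the penalty stick) lives outside this snippet, which is precisely the point.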
In any case, whether the breakthroughs come through Eiffel vX, WSDL++, or some other form remains to be seen. The next time your software component asks you to sign on the dotted line, check what you're signing.
The next Big Thing after that?
Even if the previous prediction turns out to be wrong (no warranty provided), two things seem certain:
• The issues inherent in designing large-scale distributed applications change significantly in a cross-organizational context and further still in open environments.
• Many of these issues are social rather than computational.
This suggests that at some point a choice will have to be made: Will these artificial social systems remain inherently distinct and different from human society, or will they use the same legal, ethical, and social structures that we humans use for our own interactions?
The famous Turing test to determine whether a system is truly intelligent revolves around scientists analyzing a candidate system according to a set of orchestrated rules.8
However, maybe the question should be set in an entirely different context: By 2010, will you be able to tell the artificial system from the thousands of (supposedly) human users or systems you interact with online?
is a research scientist in the Languages and Systems Department at the Universitat Politècnica de Catalunya. Contact him at firstname.lastname@example.org.