Guest Editor's Introduction: Creating Robust Software through Self-Adaptation

Robert Laddaga, Massachusetts Institute of Technology



Over the past several years, interest has grown considerably in new techniques and technologies for creating and maintaining high-quality software. These efforts have arisen in response to a growing sense among application developers that traditional approaches are inadequate. Such new methods for improving software efficiency and predictability include intentional programming, evolutionary programming, model-based programming, and self-adaptive software—the last a novel approach sponsored by the Information Technology Office of the US Defense Advanced Research Projects Agency.

Software creation, lifetime management, and quality have always been a nearly intractable set of engineering problems. Practitioners have approached these problems with a specific set of engineering techniques, specialized to the software domain: problem and tool abstraction, modularity, testing, and standards, among others. Examples of tool abstraction include high-level languages, operating systems, and database systems; examples of modularity include structured and object-oriented programming. Despite these efforts, and despite significant improvements in software tools and technology, software is still hard to produce, hard to support, and generally of significantly lower quality than we would like.

The problem is not that these more traditional approaches have been worthless in improving our ability to produce better code more affordably; rather, our reach always exceeds our grasp. As hardware capabilities improve and our understanding of how to apply computation to problems improves, we continually try to solve more difficult problems, driving up the complexity of solutions and overrunning the ability of our tools to manage that complexity. What is needed now is a fresh approach that lets us build software in a new way and that offers better methods for significantly enhancing robustness. The authors included in this special issue believe that self-adaptive software is a promising new approach (or family of approaches) that responds to these requirements.

SELF-ADAPTIVE SOFTWARE

According to the DARPA Broad Agency Announcement on Self-Adaptive Software (BAA-98-12, December 1997; see www.darpa.mil/ito/Solicitations/PIP_9812.html):

Self-adaptive software evaluates its own behavior and changes behavior when the evaluation indicates that it is not accomplishing what the software is intended to do, or when better functionality or performance is possible. ... This implies that the software has multiple ways of accomplishing its purpose and has enough knowledge of its construction to make effective changes at runtime. Such software should include functionality for evaluating its behavior and performance, as well as the ability to replan and reconfigure its operations to improve its operation. Self-adaptive software should also include a set of components for each major function, along with descriptions of the components, so that system components can be selected and scheduled at runtime, in response to the evaluators. It must also be able to impedance match input/output of sequenced components and generate some of this code from specifications. In addition, DARPA seeks this new basis of adaptation to be applied at runtime, as opposed to development/design time, or as a maintenance activity.

The key aspects of this definition are that code behavior is evaluated or tested at runtime, that a negative test result leads to a runtime change in behavior, and that the runtime code includes the following items not currently included in shipped software (the sketch after the list illustrates both):

  • descriptions of software intentions (goals and designs) and of program structure and
  • a collection of alternative implementations and algorithms (sometimes called a reuse asset base).
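
To make those two ingredients concrete, here is a minimal Python sketch, with all names invented for illustration rather than drawn from any of the articles: a "reuse asset base" pairs alternative implementations with descriptions of their design intent, and a selector consults those descriptions at runtime.

    # Hypothetical sketch (names invented for illustration): a reuse asset
    # base pairing alternative implementations with descriptions, plus
    # runtime selection driven by the observed context.

    def insertion_sort(xs):
        out = list(xs)
        for i in range(1, len(out)):
            j = i
            while j > 0 and out[j - 1] > out[j]:
                out[j - 1], out[j] = out[j], out[j - 1]
                j -= 1
        return out

    class Asset:
        """One implementation plus a description of its design intent."""
        def __init__(self, impl, description, suits):
            self.impl = impl                 # the alternative implementation
            self.description = description   # human/machine-readable design note
            self.suits = suits               # predicate over the runtime context

    class AssetBase:
        """Registry of alternatives for one major function."""
        def __init__(self, assets):
            self.assets = assets

        def select(self, context):
            # Choose the first alternative whose description matches the
            # runtime context; the last entry is the general-purpose default.
            for asset in self.assets:
                if asset.suits(context):
                    return asset.impl
            return self.assets[-1].impl

    sorters = AssetBase([
        Asset(insertion_sort, "near-linear on nearly sorted input",
              lambda ctx: ctx["disorder"] < 0.1),
        Asset(sorted, "O(n log n) general-purpose sort",
              lambda ctx: True),
    ])
    sort = sorters.select({"disorder": 0.02})   # picks insertion_sort here

The point of the sketch is that the descriptions travel with the shipped code, so a runtime evaluator can trigger reselection whenever the current choice stops meeting expectations.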

Those of us working to shape and define this area have largely been informed by two metaphors: coding an application either as a dynamic planning system or as a control system. In the first metaphor, we imagine that the application doesn't simply execute a specific set of algorithms, but instead plans its actions. Such a plan is available for inspection, evaluation, and modification. Replanning can occur at runtime in response to a negative evaluation of the plan's effectiveness or execution. The plan treats hardware, communication capacity, and code objects (components) as computational resources it can schedule and configure.
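
A minimal sketch of that loop, under the purely illustrative assumption that a plan is a list of executable steps, might look like this:

    # Hypothetical sketch of the planning metaphor: the plan is an explicit,
    # inspectable data structure, and a negative runtime evaluation of a
    # step's outcome triggers replanning. All names are illustrative.

    def run_with_replanning(goal, planner, evaluate):
        plan = planner(goal, history=[])       # plan: list of executable steps
        history = []
        while plan:
            step = plan.pop(0)
            outcome = step()                   # execute one step
            history.append(outcome)
            if not evaluate(goal, outcome):    # runtime evaluation fails...
                plan = planner(goal, history)  # ...so replan with what we know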

In the control-system metaphor, the runtime software is treated like a factory, with inputs and outputs, and a monitoring and control facility that manages the factory. Evaluation, measurement, and control systems are layered on top of the application and manage system reconfiguration. This regulated behavior derives from explicit models of the application's operation, purpose, and structure. It is significantly more complex than standard control systems, both because the effects of small changes are highly variable and because results must be filtered and diagnosed before they can be treated as feedback or feedforward signals. Despite the difficulties of applying control theory to such highly nonlinear systems, control theory appears to offer a valuable set of insights, including, for example, the concept of stability.
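
In code, the layering might be sketched, again purely as an illustration, as a monitoring loop wrapped around the application "plant":

    # Hypothetical sketch of the control-system metaphor: measurement,
    # evaluation, and control are layered on top of the application
    # ("the plant"), driven by explicit models of its intended operation.

    def control_loop(plant, model):
        while plant.running():
            raw = plant.measure()            # noisy observations
            state = model.diagnose(raw)      # filter/diagnose first, so the
                                             # result can serve as feedback
            error = model.setpoint - state
            if abs(error) > model.tolerance:
                plant.reconfigure(model.correction(error))  # adjust the factory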

The " Articles" sidebar shows the interplay between the dynamic planning and control motifs in current self-adaptive software work.

PROBLEMS, FUTURE ISSUES

We don't expect self-adaptive software technology to mature immediately. Indeed, difficult problems remain. The first is runtime performance. Evaluating outcomes of computations and determining whether expectations are being met takes time. Expectations will be met in most cases, and in those cases the checking will seem to be pure overhead. On the other hand, comprehensively evaluating which algorithms and implementations to use is an advantage if it lets us select the optimal or near-optimal algorithm for the input and state context we have at runtime, rather than making a preselected design-time compromise. Additionally, hardware performance keeps increasing, but perceived software robustness does not. Even so, we will need to find ways to eliminate unneeded evaluation cycles, possibly by developing hierarchical systems that escalate the amount of evaluation a particular problem receives. That will clearly take effort.
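
Such a hierarchical scheme could, hypothetically, look like this: cheap checks run on every cycle, and the expensive, comprehensive evaluation runs only when a cheap check raises suspicion.

    # Hypothetical sketch of hierarchical evaluation: cheap checks run on
    # every cycle; the expensive, comprehensive evaluation runs only when
    # a cheap check fails, keeping common-case overhead low.

    def evaluate(result, cheap_checks, deep_check):
        if all(check(result) for check in cheap_checks):
            return True               # fast path: expectations met
        return deep_check(result)     # escalate only on suspicion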

Software creation effort is a second performance problem. Code capable of evaluating and reconfiguring itself is difficult to write by hand. Although such coding can be accomplished in principle by having programmers and system designers write additional code (over and above the code needed for straight functionality), we would prefer automated techniques. Specifically, we think that approaches stressing automated generation of evaluators from specifications of program requirements and detailed design are likely to significantly reduce the additional burden of producing evaluators.
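
As a toy illustration of the idea (not DARPA's or any contributor's actual technique), a declarative requirements specification can be compiled mechanically into a runtime evaluator:

    # Toy sketch: mechanically deriving a runtime evaluator from a
    # declarative requirements specification, so programmers need not
    # write the evaluation code by hand.

    def make_evaluator(spec):
        """spec maps each observable's name to its (low, high) requirement."""
        def evaluator(observations):
            return all(lo <= observations[name] <= hi
                       for name, (lo, hi) in spec.items())
        return evaluator

    # "Latency must stay within 50 ms; error rate within 1 percent."
    check = make_evaluator({"latency_ms": (0, 50), "error_rate": (0.0, 0.01)})
    assert check({"latency_ms": 12, "error_rate": 0.001})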

Advances in computer hardware provide both opportunities and problems. The possibility of rapidly reconfigurable hardware, in the form of field-programmable gate arrays, is one opportunity. DARPA's Adaptive Computing Systems program is investigating how to build and program larger, denser, and faster FPGAs. The program's researchers are considering how to switch hardware programming as a computation's phases or modes change. Self-adaptive software supports runtime management of software components, including scheduling alternative components and managing I/O data mismatches. The runtime support also exploits opportunities identified by evaluators and replanning operations to restore and improve functionality or performance. Clearly, such reconfigurable software could provide a useful top-level application-programming level for reconfigurable hardware. In such a system, the hardware configuration would simply be another resource to be scheduled and planned.

On the problem side, DARPA's Data Intensive program is addressing the growing mismatch between processor speeds and memory-access speeds. One approach involves moving portions of processing to memory modules, reducing the need to move all data to the main microprocessor. Self-adaptive and reconfigurable software can play a role in dynamically determining the distribution of components in such highly parallel architectures.

Evaluation presents one of the hardest problems for self-adaptive software. It is not always intuitively clear how to evaluate functionality and performance at runtime. It might help here to consider three classes of application:

  • constructive,
  • analytic with ground truth available, and
  • analytic without ground truth.

Consider a robot attempting to reach a specified goal, around a series of partially mapped obstructions. A constructive task would be to make a plan to reach the goal. Clearly, we can evaluate the plan statically based on its merits, in comparison to constraints and goals known beforehand. The harder case is the task of actually moving closer to the goal. If obstructions prevent us from making progress, we can still evaluate performance if we know our position relative to the goal, for example, by having GPS or inertial guidance sensors on board. This is an example of an analytic problem with ground truth (our position relative to the goal) known.

A third type of problem is like the second, except that here we have evidence about our position, but not practically certain knowledge. In this case, we have an analytic problem without ground truth available. In this third and hardest class of applications, we can still use weight of evidence and probabilistic reasoning to evaluate performance at runtime. The Robertson and Musliner articles present examples of this.
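
The contrast between the second and third classes can be sketched, hypothetically, as two evaluators answering the same question: are we closer to the goal than before?

    import math

    # Hypothetical sketch of the two analytic cases from the robot example.

    def closer_with_ground_truth(gps_position, goal, previous_distance):
        # Ground truth available: GPS gives our position relative to the goal.
        return math.dist(gps_position, goal) < previous_distance

    def closer_without_ground_truth(position_hypotheses, goal, previous_distance):
        # No ground truth: weigh evidence over hypothesized (position,
        # probability) pairs and evaluate progress probabilistically.
        p_closer = sum(prob for pos, prob in position_hypotheses
                       if math.dist(pos, goal) < previous_distance)
        return p_closer > 0.5   # accept on weight of evidence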

The foregoing example makes it clear that there is great variability in the ease with which evaluation can be accomplished, and that even in the hardest case, it is still possible to evaluate. However, there is still much work to be done in determining what classes of application require what forms of evaluation, which tools will provide better evaluation capability, and to what extent such tools will need to be specific to particular application domains.

The lack of adequate metrics for degree of robustness and adaptation is another weakness of current self-adaptive software research. It is difficult to determine the effectiveness of self-adaptive software or its degree of adaptiveness, or even how many dimensions are required to measure robustness or adaptiveness.

Our notions of reconfigurability and self-knowledge raise interesting questions about the unit of modularity and where the structural and requirements knowledge should be held. We need to determine whether we should be able to evaluate individual lines of code and functions or only larger-scale modules. We must also determine where the knowledge used to inform evaluation should be held—centrally, at the module, or in individual functions or lines of code. Although it is not currently clear that there is a right answer to these questions, let alone what that right answer might be, there is an interesting implication in the notion that the knowledge and self-evaluation should be at the level of modules. Modules with self-knowledge and self-evaluation capability are one way to characterize intelligent, intentional agents. In fact, several of the articles we have gathered organize their self-adaptive programs around intentional agents. It might be that there is a useful confluence of ideas between intentional agents on the one hand and self-adaptive software on the other.
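
If the knowledge does belong at the module level, a module might carry its own description and requirements and evaluate itself locally, much as an intentional agent would. A hypothetical sketch:

    # Hypothetical sketch: a module that carries its own description and
    # requirements knowledge and evaluates itself locally -- one reading
    # of the "module as intentional agent" idea.

    class SelfAwareModule:
        def __init__(self, impl, purpose, requirement):
            self.impl = impl                 # what the module does
            self.purpose = purpose           # description of its intention
            self.requirement = requirement   # predicate over its own outputs

        def __call__(self, *args):
            result = self.impl(*args)
            if not self.requirement(result):     # local self-evaluation
                raise RuntimeError(self.purpose + ": requirement violated")
            return result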

CONCLUSION

Software design is largely the task of analyzing the cases that a system will be presented with and ensuring that the software meets its requirements for those cases. In practice, providing good coverage of cases is difficult and ensuring complete coverage is impossible. Furthermore, because program behaviors are determined in advance, the exact operating conditions and inputs are not used in deciding what the software will do. The state of the software art is to adapt to new operating conditions "off line"—through the efforts of designers, coders, and maintainers. The requirement for human intervention means that needed change is delayed. The premise of self-adaptive software is that the need for change should be detected, and the required change effected, while the program is running (at runtime).

Self-adaptive software's goal is the creation of technology that enables programs to understand, monitor, and modify themselves. Self-adaptive software understands what it does, how it does it, how to evaluate its own performance, and thus how to respond to changing conditions. I believe that self-adaptive software research will identify, promote, and evaluate new models of code design and runtime support. These new models will let software modify its own behavior to adapt, at runtime—when exact conditions and inputs are known—to discovered changes in requirements, inputs, and internal and external conditions.

Self-adaptive software will provide swifter response, improved performance, and ease of update. Programs with self-knowledge, which use that knowledge to adapt to changing circumstances, are also an interesting avenue of exploration in AI. It is surprising that in an area like AI, which has provided so many novel tools for software development, so little has been done to apply AI techniques such as planning, probabilistic reasoning, and knowledge representation to the problem of producing and managing software applications.

Finally, many interesting tasks lie ahead in the development of technology for self-adaptive software. Undoubtedly, we'll learn several interesting lessons along the way.

Articles

All six articles presented in this theme use the control metaphor to some degree. The article that most closely matches this view is "Control Theory-Based Foundations of Self-Controlling Software," by Northwestern's Mieczyslaw M. Kokar, Kenneth Baclawski, and Yonet A. Eracar. That article clearly expresses the view of runtime software as a "plant" and discusses a number of useful considerations from control theory—especially adaptive control theory. In particular, the article concentrates on a class of applications whose outputs depend on internal state and are thus limited in their range of outputs as a function of time. Such applications provide a better basis for this software-plant approach.

For similar reasons of state and relative inherent stability, the work at Honeywell has concentrated on the area of self-adaptive control of control-system applications. The article "SA-Circa: Self-Adaptive Software for Hard Real-Time Environments," by David Musliner, Robert Goldman, Michael Pelican, and Kurt Krebsbach (appearing next issue), describes self-adaptive refinements to the Circa hard real-time control system. Circa can generate plans that produce hard real-time discrete event controllers. By incorporating runtime evaluation and runtime plan generation or modification, SA-Circa can adapt at runtime to unforeseen circumstances. In addition to utilizing the control metaphor, the Honeywell system also employs the dynamic-planning metaphor, demonstrating that the two concepts are synergistic, rather than competitive.

A second characteristic that all six articles share is a self-modeling approach. Models of the software's operation, contained in the running software, drive both evaluation and reconfiguration. Essentially, the applications are built to contain knowledge of their own operation; they use that knowledge to evaluate performance, reconfigure, and adapt to changing circumstances. This self-modeling approach is very clear in the Vanderbilt article, "Model-Integrated Computing and Self-Adaptive Software," by Gabor Karsai and Janos Sztipanovits. Model-integrated computing, an approach to the model-based programming paradigm, uses formal behavioral models of a system to generate model interpreters that execute by interpreting the system's model. In considering how to make such a system adaptive, the authors observed that by moving to the metalevel and modeling the software system itself, a system developer could evaluate the behavior of the software system and reconfigure it at runtime.

This representation of self-knowledge contained in the program and operated on by the program makes self-adaptive software not only a novel approach to software engineering, but also a reemergence of software creation as an area of interest for the AI community. The Oxford article, "Adaptive Image Analysis for Aerial Surveillance," by Paul Robertson and J. Michael Brady, demonstrates this point most clearly. In this article, the traditional AI area of vision and image analysis serves as the application focus of a self-adaptive software system. In the Oxford work, the concept of reflection serves to formalize the notion of embedded self-knowledge. This concept is used to drive a control-system approach to the application of filters to an image for segmenting and labeling regions of the image.

There remain two additional concepts exemplified by all of the articles: software architecture and dynamism. Self-adaptive software is software that adapts and reconfigures at runtime. This ability to handle at runtime aspects of software creation that are normally reserved for design time is essentially dynamic. Reconfiguration causes us to pay close attention to modularity and the unit that is configurable or replaceable. The UC Irvine article, "An Architecture-Based Approach to Self-Adaptive Software," by Peyman Oreizy and his colleagues, covers both of these issues in a very straightforward manner. Given a software architecture capable of making component and connector changes at runtime, a self-adaptive system can be built by adding descriptions of the architecture and evaluating performance at runtime.

One of the articles, "Gesture-Based Programming for Robotics: Human-Augmented Software Adaptation," by Richard M. Voyles, J. Dan Morrow, and Pradeep K. Khosla, breaks new ground in self-adaptive software by considering the implications of self-adaptive software for human-computer interaction. The article describes how self-adaptive software can be used to assist a robot in interpreting human gestures used to provide programming guidance on task performance to the robot. Although the article mainly describes how to improve a robot's ability to accept human input, it is clear that similar ideas can address improving the quality of output information to humans, and to human-computer dialog in general.

About the Author

Robert Laddaga is a research scientist at MIT's AI Lab and, until recently, a program manager at the Information Technology Office of the US Defense Advanced Research Projects Agency. His research interests include intelligent systems and software, software-development tools, and semantically based collaboration. He has a PhD in philosophy from Stanford University and an MA in philosophy and a BS in mathematics from the University of South Carolina. Contact him at the MIT AI Lab, Massachusetts Inst. of Technology, Cambridge, MA 02139-4307; rladdaga@ai.mit.edu.