Issue No. 02 - February (2006 vol. 39)
ISSN: 0018-9162
pp: 25-31
Douglas C. Schmidt , Vanderbilt University

Over the past five decades, software researchers and developers have been creating abstractions that help them program in terms of their design intent rather than the underlying computing environment—for example, CPU, memory, and network devices—and shield them from the complexities of these environments.
From the early days of computing, these abstractions included both language and platform technologies. For example, early programming languages, such as assembly and Fortran, shielded developers from complexities of programming with machine code. Likewise, early operating system platforms, such as OS/360 and Unix, shielded developers from complexities of programming directly to hardware.
Although these early languages and platforms raised the level of abstraction, they still had a distinct "computing-oriented" focus. In particular, they provided abstractions of the solution space—that is, the domain of computing technologies themselves—rather than abstractions of the problem space that express designs in terms of concepts in application domains, such as telecom, aerospace, healthcare, insurance, and biology.
Lessons from computer-aided software engineering
Various past efforts have created technologies that further elevated the level of abstraction used to develop software.
One prominent effort begun in the 1980s was computer-aided software engineering (CASE), which focused on developing software methods and tools that enabled developers to express their designs in terms of general-purpose graphical programming representations, such as state machines, structure diagrams, and dataflow diagrams. One goal of CASE was to enable more thorough analysis of graphical programs that incur less complexity than conventional general-purpose programming languages—for example, by avoiding memory corruption and leaks associated with languages like C. Another goal was to synthesize implementation artifacts from graphical representations to reduce the effort of manually coding, debugging, and porting programs.
Although CASE attracted considerable attention in the research and trade literature, it wasn't widely adopted in practice. One problem it faced was that the general-purpose graphical language representations for writing programs in CASE tools mapped poorly onto the underlying platforms, which were largely single-node operating systems—such as DOS, OS/2, or Windows—that lacked support for important quality-of-service (QoS) properties, such as transparent distribution, fault tolerance, and security. The amount and complexity of generated code needed to compensate for the paucity of the underlying platforms was beyond the grasp of translation technologies available at the time, which made it hard to develop, debug, and evolve CASE tools and applications created with these tools.
Another problem with CASE was its inability to scale to handle complex, production-scale systems in a broad range of application domains. In general, CASE tools did not support concurrent engineering, so they were limited to programs written by a single person or by a team that serialized their access to files used by these tools. Moreover, due to a lack of powerful common middleware platforms, CASE tools targeted proprietary execution environments, which made it hard to integrate the code they generated with other software language and platform technologies. CASE tools also didn't support many application domains effectively because their "one-size-fits-all" graphical representations were too generic and noncustomizable.
As a result, CASE had relatively little impact on commercial software development during the 1980s and 1990s, focusing primarily on a few domains, such as telecom call processing, that mapped nicely onto state machine representations. To the extent that CASE tools were applied in practice, they were limited largely to a subset of tools that enabled designers to draw diagrams of software architectures and document design decisions, which programmers then used to help guide the creation and evolution of their handcrafted implementations. Because there was no direct relationship between the diagrams and the implementations, however, developers tended not to put much stock in the diagrams' accuracy, since the diagrams were rarely in sync with the code during later stages of projects.
Current platform and language limitations
Advances in languages and platforms during the past two decades have raised the level of software abstractions available to developers, thereby alleviating one impediment to earlier CASE efforts. For example, developers today typically use more expressive object-oriented languages, such as C++, Java, or C#, rather than Fortran or C. Likewise, today's reusable class libraries and application framework platforms minimize the need to reinvent common and domain-specific middleware services, such as transactions, discovery, fault tolerance, event notification, security, and distributed resource management. Due to the maturation of third-generation languages and reusable platforms, therefore, software developers are now better equipped to shield themselves from complexities associated with creating applications using earlier technologies.
Despite these advances, several vexing problems remain. At the heart of these problems is the growth of platform complexity, which has evolved faster than the ability of general-purpose languages to mask it. For example, popular middleware platforms, such as J2EE, .NET, and CORBA, contain thousands of classes and methods with many intricate dependencies and subtle side effects that require considerable effort to program and tune properly. Moreover, since these platforms often evolve rapidly—and new platforms appear regularly—developers expend considerable effort manually porting application code to different platforms or newer versions of the same platform.
A related problem is that most application and platform code is still written and maintained manually using third-generation languages, which incurs excessive time and effort—particularly for key integration-related activities, such as system deployment, configuration, and quality assurance. For example, it is hard to write Java or C# code that correctly and optimally deploys large-scale distributed systems with hundreds or thousands of interconnected software components. Even using newer notations, such as XML-based deployment descriptors popular with component and service-oriented architecture middleware platforms, is fraught with complexity. Much of this complexity stems from the semantic gap between the design intent—for example, "deploy components 1-50 onto nodes A-G and components 51-100 onto nodes H-N in accordance with system resource requirements and availability"—and the expression of this intent in thousands of lines of handcrafted XML whose visually dense syntax conveys neither domain semantics nor design intent.
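To make this semantic gap concrete, the following Python sketch expands a two-line statement of the deployment intent above into per-component placements. The component numbering, node names, round-robin policy, and `<instance>` element are all invented for illustration; real deployment descriptors and placement policies are platform-specific.

```python
# Hypothetical declarative deployment intent: two component ranges
# mapped to two node pools (all names are illustrative).
intent = [
    (range(1, 51), list("ABCDEFG")),    # components 1-50 -> nodes A-G
    (range(51, 101), list("HIJKLMN")),  # components 51-100 -> nodes H-N
]

def expand(intent):
    """Expand the intent into per-component placements (round-robin)."""
    plan = {}
    for components, nodes in intent:
        for i, comp in enumerate(components):
            plan[comp] = nodes[i % len(nodes)]
    return plan

plan = expand(intent)

# Each placement would become a multi-line stanza in a handcrafted XML
# descriptor, so the two-line intent balloons into hundreds of lines
# whose syntax conveys neither domain semantics nor design intent.
descriptor_lines = [
    f'<instance id="component-{c}"><node>{n}</node></instance>'
    for c, n in sorted(plan.items())
]

print(len(plan), plan[1], plan[51])  # -> 100 A H
```

The point of the sketch is the ratio: two lines of intent expand into one hundred placement stanzas, which is exactly the expansion that handcrafted descriptors force developers to write and maintain by hand.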
Due to these types of problems, the software industry is reaching a complexity ceiling where next-generation platform technologies, such as Web services and product-line architectures, have become so complex that developers spend years mastering—and wrestling with—platform APIs and usage patterns, and are often familiar with only a subset of the platforms they use regularly. Moreover, third-generation languages require developers to pay such close attention to numerous tactical imperative programming details that they often can't focus on strategic architectural issues such as system-wide correctness and performance.
These fragmented views make it hard for developers to know which portions of their applications are susceptible to side effects arising from changes to user requirements and language/platform environments. The lack of an integrated view—coupled with the danger of unforeseen side effects—often forces developers to implement suboptimal solutions that unnecessarily duplicate code, violate key architectural principles, and complicate system evolution and quality assurance.
Model-Driven Engineering
A promising approach to address platform complexity—and the inability of third-generation languages to alleviate this complexity and express domain concepts effectively—is to develop Model-Driven Engineering (MDE) technologies that combine the following:

    Domain-specific modeling languages (DSMLs) whose type systems formalize the application structure, behavior, and requirements within particular domains, such as software-defined radios, avionics mission computing, online financial services, warehouse management, or even the domain of middleware platforms. DSMLs are described using metamodels, which define the relationships among concepts in a domain and precisely specify the key semantics and constraints associated with these domain concepts. Developers use DSMLs to build applications using elements of the type system captured by metamodels and express design intent declaratively rather than imperatively.

    Transformation engines and generators that analyze certain aspects of models and then synthesize various types of artifacts, such as source code, simulation inputs, XML deployment descriptions, or alternative model representations. The ability to synthesize artifacts from models helps ensure the consistency between application implementations and analysis information associated with functional and QoS requirements captured by models. This automated transformation process is often referred to as "correct-by-construction," as opposed to conventional handcrafted "construct-by-correction" software development processes that are tedious and error prone.
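As an illustration only, the following Python sketch combines the two ingredients above: a toy "metamodel" expressed as dataclasses and a generator that synthesizes an XML-style deployment description from a model instance. The types, element names, and component names are invented for this example and do not correspond to any real metamodeling framework or middleware platform.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Component:
    """Metamodel concept: a deployable component."""
    name: str
    host: str  # domain constraint: every component names its host

@dataclass(frozen=True)
class SystemModel:
    """Metamodel concept: a system is a collection of components."""
    components: tuple

def generate_descriptor(model: SystemModel) -> str:
    """Synthesize a deployment-descriptor artifact from the model."""
    body = "\n".join(
        f'  <component name="{c.name}" host="{c.host}"/>'
        for c in model.components
    )
    return f"<deployment>\n{body}\n</deployment>"

model = SystemModel(components=(
    Component("radio-ctrl", "nodeA"),
    Component("gps-feed", "nodeB"),
))
print(generate_descriptor(model))
```

Because the descriptor is synthesized rather than handwritten, it cannot drift out of sync with the model, which is the "correct-by-construction" property the text describes.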

Existing and emerging MDE technologies apply lessons learned from earlier efforts at developing higher-level platform and language abstractions. For example, instead of general-purpose notations that rarely express application domain concepts and design intent, DSMLs can be tailored via metamodeling to precisely match the domain's semantics and syntax. Having graphic elements that relate directly to a familiar domain not only helps flatten learning curves but also helps a broader range of subject matter experts, such as system engineers and experienced software architects, ensure that software systems meet user needs.
Moreover, MDE tools impose domain-specific constraints and perform model checking that can detect and prevent many errors early in the life cycle. In addition, since today's platforms have much richer functionality and QoS than those in the 1980s and 1990s, MDE tool generators need not be as complicated since they can synthesize artifacts that map onto higher-level, often standardized, middleware platform APIs and frameworks, rather than lower-level OS APIs. As a result, it's often much easier to develop, debug, and evolve MDE tools and applications created with these tools.
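The kind of early, model-level checking described above can be sketched as follows. The two constraints enforced here (placements must name known nodes, and per-node memory demand must not exceed capacity) are invented for illustration rather than drawn from any particular MDE tool.

```python
def check_model(placements, node_capacity):
    """Return a list of constraint violations found in a model.

    placements:    dict of component name -> (node, required memory in MB)
    node_capacity: dict of node name -> available memory in MB
    """
    errors = []
    used = {}
    for comp, (node, mem) in placements.items():
        if node not in node_capacity:
            errors.append(f"{comp}: unknown node {node}")
            continue
        used[node] = used.get(node, 0) + mem
    for node, total in used.items():
        if total > node_capacity[node]:
            errors.append(
                f"{node}: over capacity ({total} > {node_capacity[node]} MB)"
            )
    return errors

errors = check_model(
    {"sensor": ("A", 64), "logger": ("A", 96), "ui": ("Z", 32)},
    {"A": 128, "B": 128},
)
print(errors)  # flags the unknown node and node A's overload
```

Catching both violations at the model level, before any code is generated or deployed, is what shifts error detection earlier in the life cycle.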
In this issue
This special issue of Computer contains four articles that describe the results from recent R&D efforts that represent the new generation of MDE tools and environments.
Two of these articles focus on the pressing need for creating languages that help reduce the complexity of developing and using modern platforms. "Developing Applications Using Model-Driven Design Environments" by Krishnakumar Balasubramanian and colleagues describes several DSMLs that simplify and automate many activities associated with developing, optimizing, deploying, and verifying component-based distributed real-time and embedded systems. "CALM and Cadena: Metamodeling for Component-Based Product-Line Development" by Adam Childs and colleagues presents an MDE framework that uses extended type systems to capture component-based software product-line architectures and arrange those architectures into hierarchies to transform platform-independent models into platform-specific models.
When developers apply MDE tools to model large-scale systems containing thousands of elements, they must be able to examine various design alternatives quickly and evaluate the many diverse configuration possibilities available to them. "Automating Change Evolution in Model-Driven Engineering" by Jeff Gray and colleagues describes a model transformation engine for exploring and manipulating large models. The authors present a solution that considers issues of scalability in MDE (such as scaling a base model of a sensor network to thousands of sensors) and applies an aspect-oriented approach to modeling crosscutting concerns (such as a flight data recorder policy that spans multiple avionics components).
As MDE tools cross the chasm from early adopters to mainstream software developers, a key challenge is to define useful standards that enable tools and models to work together portably and effectively. In "Model-Driven Development Using UML 2.0: Promises and Pitfalls," Robert B. France and colleagues evaluate the pros and cons of UML 2.0 features in terms of their MDE support.
Another source of information on MDE standardization is the Object Management Group's Web site, which describes the efforts of the Model-Integrated Computing Platform Special Interest Group, which is standardizing the results of R&D efforts funded by government agencies such as the Defense Advanced Research Projects Agency and the National Science Foundation.
An example of this transition from R&D to standards is the Open Tool Integration Framework, a metamodel-based approach to MDE tool integration that defines architectural components (such as tool adapters and semantic translators) and interaction protocols for forming integrated design tool chains. Other standards, such as Query/Views/Transformations and the MetaObject Facility, which are being defined as part of OMG's UML-based Model-Driven Architecture standard, can also serve as the basis for domain-specific MDE tools.
Standards alone, however, are insufficient without solid infrastructure support for developing and evolving MDE tools and applications. The articles in this special issue describe the application of various MDE tools, such as Eclipse from IBM and the Generic Modeling Environment from the Institute for Software Integrated Systems, to a range of commercial and R&D projects. To explore commercial adoption in more depth, a pair of sidebars, "Model-Centric Software Development" by Daniel Waddington and Patrick Lardieri and "Domain-Specific Modeling Languages for Enterprise DRE System QoS" by John M. Slaby and Steven D. Baker, summarize experiences applying MDE tools to complex command-and-control and shipboard computing projects in large system integrator companies.
The lessons learned from these types of projects help mature the MDE tool infrastructure and harden it for adoption in mainstream commercial projects. Several emerging MDE tools that bear watching in the future are the Eclipse Graphical Modeling Framework, the DSL Toolkit in Microsoft's Visual Studio Team System, and openArchitectureWare available from SourceForge.
As the articles in this special issue show, recent advances stemming from years of R&D efforts around the world have enabled the successful application of MDE to meet many needs of complex software systems. To avoid problems with earlier CASE tools that integrated poorly with other technologies, these MDE efforts recognize that models alone are insufficient to develop complex systems.
These articles therefore describe how MDE leverages, augments, and integrates other technologies, such as patterns, model checkers, third-generation and aspect-oriented languages, application frameworks, component middleware platforms, and product-line architectures. In this broader context, models and MDE tools serve as a unifying vehicle to document, analyze, and transform information systematically at many phases throughout a system's life cycle, capturing various aspects of application structure, behavior, and QoS using general-purpose or domain-specific notations.
Although a great deal of publicity on model-driven topics has appeared in the trade press, it's surprisingly hard to find solid technical material on MDE technologies, applications of these technologies to complex production-scale systems, and frank assessments of MDE's benefits and the areas that still need attention. For example, further R&D is needed to support round-trip engineering and synchronization between models and source code or other model representations; improve debugging at the modeling level; ensure backward compatibility of MDE tools; standardize metamodeling environments and model interchange formats; capture design intent for arbitrary application domains; automate the specification and synthesis of model transformations and QoS properties to simplify the evolution of models and metamodels; and certify safety properties of models in DSMLs and in their generated artifacts.
Although MDE still faces some R&D challenges, decades of progress and commercialization have enabled us to reach the critical mass necessary to cross the chasm to mainstream software practitioners. The articles in this issue help replace hype with sound technical insights and lessons learned from experience with complex systems. We encourage you to get involved with the MDE community and contribute your experience in future conferences, journals, and other publication venues. A summary of upcoming events and venues for learning about and sharing information on the model-driven engineering of software systems is available online.
I thank Frank Buschmann, Jack Greenfield, Kevlin Henney, Andrey Nechypurenko, Jeff Parsons, and Markus Völter for spirited discussions at the 2005 OOP Conference in Munich, Germany, that helped shape my thinking on the role of MDE in modern languages and platforms. I also thank Arvind S. Krishna for inspiring this special issue. Finally, thanks to the members of the MDE R&D community, who are spurring a crucial paradigm shift in generative software technologies.
Douglas C. Schmidt is associate chair of computer science and engineering and a professor of computer science at Vanderbilt University.