SOFTWARE ENGINEERING PROCESS
CMMI: Capability Maturity Model Integration
EF: Experience Factory
FP: Function Point
HRM: Human Resources Management
IDEAL: Initiating-Diagnosing-Establishing-Acting-Learning (model)
OMG: Object Management Group
QIP: Quality Improvement Paradigm
SCAMPI: Standard CMMI Appraisal Method for Process Improvement
SCE: Software Capability Evaluation
SEPG: Software Engineering Process Group
The Software Engineering Process KA can be examined on two levels. The first level encompasses the technical and managerial activities within the software life cycle processes that are performed during software acquisition, development, maintenance, and retirement. The second is the meta-level, which is concerned with the definition, implementation, assessment, measurement, management, change, and improvement of the software life cycle processes themselves. The first level is covered by the other KAs in the Guide. This KA is concerned with the second.
The term "software engineering process" can be interpreted in different ways, and this may cause confusion.
This KA applies to any part of the management of software life cycle processes where procedural or technological change is being introduced for process or product improvement.
Software engineering process is relevant not only to large organizations. On the contrary, process-related activities can be, and have been, performed successfully by small organizations, teams, and individuals.
The objective of managing software life cycle processes is to implement new or better processes in actual practices, be they individual, project, or organizational.
This KA does not explicitly address human resources management (HRM), for example, as embodied in the People CMM (Cur02), or systems engineering processes [ISO15288-02; IEEE 1220-98].
It should also be recognized that many software engineering process issues are closely related to other disciplines, such as management, albeit sometimes using a different terminology.
Figure 1 shows the breakdown of topics in this KA.
Figure 1 Breakdown of topics for the Software Engineering Process KA
This subarea focuses on organizational change. It describes the infrastructure, activities, models, and practical considerations for process implementation and change.
Described here is the situation in which processes are deployed for the first time (for example, introducing an inspection process within a project or a method covering the complete life cycle), and where current processes are changed (for example, introducing a tool, or optimizing a procedure). This can also be termed process evolution. In both instances, existing practices have to be modified. If the modifications are extensive, then changes in the organizational culture may also be necessary.
This topic includes the knowledge related to the software engineering process infrastructure.
To establish software life cycle processes, it is necessary to have an appropriate infrastructure in place, meaning that the resources must be available (competent staff, tools, and funding) and the responsibilities assigned. When these tasks have been completed, it is an indication of management's commitment to, and ownership of, the software engineering process effort.
Various committees may have to be established, such as a steering committee to oversee the software engineering process effort. A description of an infrastructure for process improvement in general is provided in [McF96]. Two main types of infrastructure are used in practice: the Software Engineering Process Group and the Experience Factory.
The SEPG is intended to be the central focus of software engineering process improvement, and it has a number of responsibilities in terms of initiating and sustaining it. These are described in [Fow90].
The concept of the EF separates the project organization (the software development organization, for example) from the improvement organization. The project organization focuses on the development and maintenance of software, while the EF is concerned with software engineering process improvement.
The EF is intended to institutionalize the collective learning of an organization by developing, updating, and delivering to the project organization experience packages (for example, guides, models, and training courses), also referred to as process assets. The project organization offers the EF their products, the plans used in their development, and the data gathered during development and operation. Examples of experience packages are presented in [Bas92].
The management of software processes consists of four activities sequenced in an iterative cycle that allows continuous feedback and improvement of the software process.
Two general models that have emerged for driving process implementation and change are the Quality Improvement Paradigm (QIP) [SEL96] and the IDEAL model [McF96]. The two paradigms are compared in [SEL96]. Evaluation of process implementation and change outcomes can be qualitative or quantitative.
Process implementation and change constitute an instance of organizational change. Most successful organizational change efforts treat the change as a project in its own right, with appropriate plans, monitoring, and review.
Guidelines about process implementation and change within software engineering organizations, covering action planning, training, management sponsorship, commitment, and the selection of pilot projects, as well as both processes and tools, are given in [Moi98; San98; Sti99]. Empirical studies on success factors for process change are reported in (ElE99a).
The role of change agents in this activity is discussed in (Hut94). Process implementation and change can also be seen as an instance of consulting (either internal or external).
One can also view organizational change from the perspective of technology transfer (Rog83). Software engineering articles which discuss technology transfer and the characteristics of recipients of new technology (which could include process-related technologies) are (Pfl99; Rag89).
There are two ways of approaching the evaluation of process implementation and change, either in terms of changes to the process itself or in terms of changes to the process outcomes (for example, measuring the return on investment from making the change). A pragmatic look at what can be achieved from such evaluation studies is given in (Her98).
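The return-on-investment figure mentioned above reduces to simple arithmetic. The sketch below uses invented numbers purely for illustration:

```python
def roi(benefit, cost):
    """Simple return on investment: net benefit per unit of cost."""
    return (benefit - cost) / cost

# Illustrative numbers: a change costing 40 person-days of training and
# tool setup that saves 100 person-days of rework over the study period.
print(roi(benefit=100, cost=40))  # 1.5, i.e., a 150% return
```

In practice the hard part is not the division but attributing the benefit to the process change rather than to other factors, which is why controlled comparisons or baselines are needed.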
Overviews of how to evaluate process implementation and change, and examples of studies that do so, can be found in [Gol99], (Kit98; Kra99; McG94).
A process definition can be a procedure, a policy, or a standard. Software life cycle processes are defined for a number of reasons, including increasing the quality of the product, facilitating human understanding and communication, supporting process improvement, supporting process management, providing automated process guidance, and providing automated execution support. The types of process definitions required will depend, at least partially, on the reason for the definition.
It should also be noted that the context of the project and organization will determine the type of process definition that is most useful. Important variables to consider include the nature of the work (for example, maintenance or development), the application domain, the life cycle model, and the maturity of the organization.
Software life cycle models serve as a high-level definition of the phases that occur during development. They are not aimed at providing detailed definitions but at highlighting the key activities and their interdependencies. Examples of software life cycle models are the waterfall model, the throwaway prototyping model, evolutionary development, incremental/iterative delivery, the spiral model, the reusable software model, and automated software synthesis. Comparisons of these models are provided in [Com97], (Dav88), and a method for selecting among many of them in (Ale91).
Definitions of software life cycle processes tend to be more detailed than software life cycle models. However, software life cycle process definitions do not attempt to order their processes in time. This means that, in principle, the software life cycle processes can be arranged to fit any of the software life cycle models. The main reference in this area is IEEE/EIA 12207.0: Information Technology — Software Life Cycle Processes [IEEE 12207.0-96].
The IEEE 1074:1997 standard on developing life cycle processes also provides a list of processes and activities for software development and maintenance [IEEE1074-97], as well as a list of life cycle activities which can be mapped into processes and organized in the same way as any of the software life cycle models. In addition, it identifies and links other IEEE software standards to these activities. In principle, IEEE Std 1074 can be used to build processes conforming to any of the life cycle models. Standards which focus on maintenance processes are IEEE Std 1219-1998 and ISO 14764: 1998 [IEEE 1219-98].
Other important standards also provide process definitions.
In some situations, software engineering processes must be defined taking into account the organizational processes for quality management. ISO 9001 [ISO9001-00] provides requirements for quality management processes, and ISO/IEC 90003 interprets those requirements for organizations developing software (ISO90003-04).
Some software life cycle processes emphasize rapid delivery and strong user participation, namely agile methods such as Extreme Programming [Bec99]. One form of the selection problem is the choice between agile and plan-driven methods; a risk-based approach to making that decision is described in (Boe03a).
Processes can be defined at different levels of abstraction (for example, generic definitions vs. adapted definitions, descriptive vs. prescriptive vs. proscriptive) [Pfl01]. Various elements of a process can be defined, for example, activities, products (artifacts), and resources. Detailed frameworks which structure the types of information required to define processes are described in (Mad94).
A number of notations are used to define processes (SPC92). A key difference between them lies in the type of information the frameworks mentioned above define, capture, and use. The software engineer should be aware of the following approaches: data flow diagrams; definitions in terms of process purpose and outcomes [ISO15504-98]; lists of processes decomposed into constituent activities and tasks defined in natural language [IEEE12207.0-96]; Statecharts (Har98); ETVX (Rad85); Actor-Dependency modeling (Yu94); SADT notation (McG93); Petri nets (Ban95); IDEF0 (IEEE 1320.1-98); and rule-based approaches (Bar95). More recently, the OMG has published a process modeling standard intended to harmonize modeling notations: the SPEM (Software Process Engineering Meta-Model) specification [OMG02].
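As an illustration of one of these notations, an ETVX-style definition (Entry criteria, Tasks, Verification, eXit criteria) can be captured as a simple data structure. The process name and criteria below are hypothetical, a minimal sketch rather than any standard's schema:

```python
from dataclasses import dataclass, field

@dataclass
class EtvxProcess:
    """ETVX: Entry criteria, Tasks, Verification, eXit criteria."""
    name: str
    entry: list = field(default_factory=list)
    tasks: list = field(default_factory=list)
    verification: list = field(default_factory=list)
    exit: list = field(default_factory=list)

# A hypothetical inspection process expressed in ETVX form
inspection = EtvxProcess(
    name="Code inspection",
    entry=["code compiles cleanly", "inspection checklist available"],
    tasks=["individual preparation", "inspection meeting", "rework"],
    verification=["moderator confirms all major defects resolved"],
    exit=["inspection report filed"],
)
print(len(inspection.tasks))  # 3
```

A machine-readable definition like this is what makes the automated guidance and execution support discussed below possible.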
It is important to note that predefined processes—even standardized ones—must be adapted to local needs, for example, organizational context, project size, regulatory requirements, industry practices, and corporate cultures. Some standards, such as IEEE/EIA 12207, contain mechanisms and recommendations for accomplishing the adaptation.
Automated tools either support the execution of the process definitions or they provide guidance to humans performing the defined processes. In cases where process analysis is performed, some tools allow different types of simulations (for example, discrete event simulation).
In addition, there are tools which support each of the above process definition notations. Moreover, these tools can execute the process definitions to provide automated support to the actual processes, or to fully automate them in some instances. An overview of process-modeling tools can be found in [Fin94] and of process-centered environments in (Gar96). Work on applying the Internet to the provision of real-time process guidance is described in (Kel98).
Process assessment is carried out using both an assessment model and an assessment method. In some instances, the term "appraisal" is used instead of assessment, and the term "capability evaluation" is used when the appraisal is for the purpose of awarding a contract.
An assessment model captures what is recognized as good practices. These practices may pertain to technical software engineering activities only, or may also refer to other activities such as management, systems engineering, and human resources management.
ISO/IEC 15504 [ISO15504-98] defines an exemplar assessment model and conformance requirements on other assessment models. Specific assessment models available and in use are SW-CMM (SEI95), CMMI [SEI01], and Bootstrap [Sti99]. Many other capability and maturity models have been defined—for example, for design, documentation, and formal methods, to name a few. ISO 9001 is another common assessment model which has been applied by software organizations (ISO9001-00).
A maturity model for systems engineering has also been developed, which would be useful where a project or organization is involved in the development and maintenance of systems, including software (EIA/IS731-99).
The applicability of assessment models to small organizations is addressed in [Joh99; San98].
There are two general architectures for an assessment model that make different assumptions about the order in which processes must be assessed: continuous and staged (Pau94). They are very different, and should be evaluated by the organization considering them to determine which would be the most pertinent to their needs and objectives.
In order to perform an assessment, a specific assessment method needs to be followed to produce a quantitative score which characterizes the capability of the process (or maturity of the organization).
The CBA-IPI assessment method, for example, focuses on process improvement (Dun96), and the SCE method focuses on evaluating the capability of suppliers (Bar95). Both of these were developed for the SW-CMM. Requirements on both types of methods which reflect what are believed to be good assessment practices are provided in [ISO15504-98], (Mas95). The SCAMPI methods are geared toward CMMI assessments [SEI01]. The activities performed during an assessment, the distribution of effort on these activities, as well as the atmosphere during an assessment are different when they are for improvement than when they are for a contract award.
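To make the idea of a quantitative capability score concrete, the sketch below aggregates practice ratings into an attribute judgment. This is a much-simplified illustration and NOT the actual SCAMPI, CBA-IPI, or ISO/IEC 15504 rating scheme; the 0-3 scale and the threshold are assumptions:

```python
# Each practice is rated 0..3 by assessors; a process attribute is
# judged "achieved" when the mean rating reaches a threshold.
# (Illustrative scheme only, not a standardized appraisal method.)
def attribute_achieved(ratings, threshold=2.0):
    return sum(ratings) / len(ratings) >= threshold

print(attribute_achieved([3, 2, 2, 1]))  # True (mean is exactly 2.0)
print(attribute_achieved([1, 1, 2]))     # False
```

Real assessment methods define the rating scale, the evidence required for each rating, and the aggregation rules in much greater detail, which is precisely why a defined method must be followed for scores to be comparable.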
There have been criticisms of process assessment models and methods, for example (Fay97; Gra98). Most of these criticisms have been concerned with the empirical evidence supporting the use of assessment models and methods. However, since the publication of these articles, there has been some systematic evidence supporting the efficacy of process assessments. (Cla97; Ele00; Ele00a; Kri99)
While the application of measurement to software engineering can be complex, particularly in terms of modeling and analysis methods, there are several aspects of software engineering measurement which are fundamental and which underlie many of the more advanced measurement and analysis processes. Furthermore, achievement of process and product improvement efforts can only be assessed if a set of baseline measures has been established.
Measurement can be performed to support the initiation of process implementation and change or to evaluate the consequences of process implementation and change, or it can be performed on the product itself.
Key terms on software measures and measurement methods have been defined in ISO/IEC 15939 on the basis of the ISO international vocabulary of metrology [VIM93]. ISO/IEC 15939 also provides a standard process for measuring both process and product characteristics.
Nevertheless, readers will encounter terminological differences in the literature; for example, the term "metric" is sometimes used in place of "measure."
The term "process measurement" as used here means that quantitative information about the process is collected, analyzed, and interpreted. Measurement is used to identify the strengths and weaknesses of processes and to evaluate processes after they have been implemented and/or changed.
Process measurement may serve other purposes as well. For example, process measurement is useful for managing a software engineering project. Here, the focus is on process measurement for the purpose of process implementation and change.
The path diagram in Figure 2 illustrates an important assumption made in most software engineering projects: that the process has an impact on project outcomes. The context affects the relationship between the process and process outcomes; that is, the same process may lead to different outcomes in different contexts.
Not every process will have a positive impact on all outcomes. For example, the introduction of software inspections may reduce testing effort and cost, but may increase elapsed time if each inspection introduces long delays due to the scheduling of large inspection meetings. (Vot93) Therefore, it is preferable to use multiple process outcome measures which are important to the organization's business.
While some effort can be made to assess the utilization of tools and hardware, the primary resource that needs to be managed in software engineering is personnel. As a result, the main measures of interest are those related to the productivity of teams or processes (for example, using a measure of function points produced per unit of person-effort) and their associated levels of experience in software engineering in general, and perhaps in particular technologies. [Fen98: c3, c11; Som05: c25]
Process outcomes could, for example, be product quality (faults per KLOC (Kilo Lines of Code) or per Function Point (FP)), maintainability (the effort to make a certain type of change), productivity (LOC (Lines of Code) or Function Points per person-month), time-to-market, or customer satisfaction (as measured through a customer survey). This relationship depends on the particular context (for example, size of the organization or size of the project).
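The outcome measures named above are straightforward ratios. The sketch below computes two of them; the project figures are invented for illustration:

```python
def faults_per_kloc(faults, loc):
    """Product quality: faults per thousand lines of code."""
    return faults / (loc / 1000)

def fp_productivity(function_points, person_months):
    """Productivity: function points delivered per person-month."""
    return function_points / person_months

# Hypothetical project: 30 faults in 15,000 LOC; 120 FP in 10 person-months
print(faults_per_kloc(30, 15_000))   # 2.0 faults/KLOC
print(fp_productivity(120, 10))      # 12.0 FP per person-month
```

Because each ratio captures only one outcome dimension, such measures are best reported together, echoing the point above about using multiple process outcome measures.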
In general, we are most concerned about process outcomes. However, in order to achieve the process outcomes that we desire (for example, better quality, better maintainability, greater customer satisfaction), we have to implement the appropriate process.
Of course, it is not only the process that has an impact on outcomes. Other factors, such as the capability of the staff and the tools that are used, play an important role. When evaluating the impact of a process change, for example, it is important to factor out these other influences. Furthermore, the extent to which the process is institutionalized (that is, process fidelity) is important, as it may explain why "good" processes do not always give the desired outcomes in a given situation.
Figure 2 Path diagram showing the relationship between process and outcomes (results).
Software Product Measurement [ISO9126-01]
Software product measurement includes, notably, the measurement of product size, product structure, and product quality.
Software product size is most often assessed by measures of length (for example, lines of source code in a module, pages in a software requirements specification document), or functionality (for example, function points in a specification). The principles of functional size measurement are provided in IEEE Std 14143.1. International standards for functional size measurement methods include ISO/IEC 19761, 20926, and 20968 [IEEE 14143.1-00; ISO19761-03; ISO20926-03; ISO20968-02].
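As a worked illustration of functional size, the sketch below computes an unadjusted function point count using the average-complexity weights of the IFPUG method. Treat it as a sketch: a real count classifies each item as low, average, or high complexity and may apply a value adjustment factor, and the specification counts below are hypothetical:

```python
# Average-complexity weights from the IFPUG function point method:
# external inputs (EI), external outputs (EO), external inquiries (EQ),
# internal logical files (ILF), external interface files (EIF).
WEIGHTS = {"EI": 4, "EO": 5, "EQ": 4, "ILF": 10, "EIF": 7}

def unadjusted_fp(counts):
    """Sum of weighted counts of the five function types."""
    return sum(WEIGHTS[kind] * n for kind, n in counts.items())

# Hypothetical specification: 10 inputs, 6 outputs, 4 inquiries,
# 5 internal files, 2 external interface files.
print(unadjusted_fp({"EI": 10, "EO": 6, "EQ": 4, "ILF": 5, "EIF": 2}))  # 150
```

Unlike LOC, such a count can be taken from a specification before any code exists, which is why functional size is favored for early estimation.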
A diverse range of measures of software product structure may be applied to both high- and low-level design and code artifacts to reflect control flow (for example, the cyclomatic number, code knots), data flow (for example, measures of slicing), nesting (for example, the nesting polynomial measure, the BAND measure), control structures (for example, the vector measure, the NPATH measure), and modular structure and interaction (for example, information flow, tree-based measures, coupling and cohesion). [Fen98: c8; Pre04: c15]
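As an example of a control-flow measure, McCabe's cyclomatic number can be computed directly from a program's control-flow graph. The graph sizes below are invented for illustration:

```python
def cyclomatic_number(edges, nodes, components=1):
    """McCabe's cyclomatic number: V(G) = E - N + 2P,
    where E is the number of edges, N the number of nodes, and
    P the number of connected components of the control-flow graph."""
    return edges - nodes + 2 * components

# A hypothetical control-flow graph with 9 edges and 7 nodes
print(cyclomatic_number(edges=9, nodes=7))  # 4
```

The result equals the number of linearly independent paths through the graph, which is why it is often used to bound the number of test cases needed for branch coverage.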
As a multi-dimensional attribute, quality measurement is less straightforward to define than those above. Furthermore, some of the dimensions of quality are likely to require measurement in qualitative rather than quantitative form. A more detailed discussion of software quality measurement is provided in the Software Quality KA, topic 3.4. ISO models of software product quality and of related measurements are described in ISO 9126, parts 1 to 4 [ISO9126-01]. [Fen98: c9,c10; Pre04: c15; Som05: c24]
The quality of the measurement results (accuracy, reproducibility, repeatability, convertibility, random measurement errors) is essential for the measurement programs to provide effective and bounded results. Key characteristics of measurement results and related quality of measuring instruments have been defined in the ISO International vocabulary on metrology. [VIM93]
The theory of measurement establishes the foundation on which meaningful measurements can be made. The theory of measurement and scale types is discussed in [Kan02]. Measurement is defined in the theory as "the assignment of numbers to objects in a systematic way to represent properties of the object."
An appreciation of software measurement scales and the implications of each scale type for the subsequent selection of data analysis methods is especially important. [Abr96; Fen98: c2; Pfl01: c11] Measurement theory distinguishes a succession of increasingly constrained ways of assigning the measures. If the numbers assigned merely provide labels to classify the objects, the scale is nominal. If they are assigned in a way that ranks the objects (for example, good, better, best), the scale is ordinal. If they express magnitudes of the property relative to a defined measurement unit, with uniform intervals between the numbers, the scale is interval (and differences are therefore additive). Measurements are at the ratio level if they have an absolute zero point, so that ratios of distances to the zero point are meaningful.
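The practical consequence of the scale hierarchy is that each scale type licenses only certain summary statistics. The table below is a common reading of measurement theory, sketched as a lookup:

```python
# Which summary statistics are meaningful depends on the scale type;
# each level of the hierarchy inherits the statistics of the levels below it.
PERMISSIBLE = {
    "nominal":  ["mode"],
    "ordinal":  ["mode", "median"],
    "interval": ["mode", "median", "mean"],
    "ratio":    ["mode", "median", "mean", "ratio"],
}

def meaningful(scale, statistic):
    return statistic in PERMISSIBLE[scale]

print(meaningful("ordinal", "mean"))  # False: averaging ranks is not meaningful
print(meaningful("ratio", "ratio"))   # True: an absolute zero makes ratios meaningful
```

This is why, for example, averaging ordinal assessor ratings, common in practice, is formally questionable, while ratios of effort or size measures are well defined.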
As the data are collected and the measurement repository is populated, we become able to build models using both data and knowledge.
These models exist for the purposes of analysis, classification, and prediction. Such models need to be evaluated to ensure that their levels of accuracy are sufficient and that their limitations are known and understood. The refinement of models, which takes place both during and after projects are completed, is another important activity.
Model building includes both calibration and evaluation of the model. The goal-driven approach to measurement informs the model building process to the extent that models are constructed to answer relevant questions and achieve software improvement goals. This process is also influenced by the implied limitations of particular measurement scales in relation to the choice of analysis method. The models are calibrated (by using particularly relevant observations, for example, recent projects, projects using similar technology) and their effectiveness is evaluated (for example, by testing their performance on holdout samples). [Fen98: c4,c6,c13;Pfl01: c3,c11,c12; Som05: c25]
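The calibrate-then-evaluate cycle described above can be sketched in a few lines: fit a simple linear effort model on recent projects, then check its mean magnitude of relative error (MMRE) on a holdout sample. All project data below are invented, and a real model would rarely be a single linear term:

```python
def fit_linear(xs, ys):
    """Least-squares fit of y = a*x + b; returns (a, b)."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

def mmre(model, holdout):
    """Mean magnitude of relative error on (size, actual effort) pairs."""
    a, b = model
    errors = [abs(y - (a * x + b)) / y for x, y in holdout]
    return sum(errors) / len(errors)

# Calibration data: (size in KLOC, effort in person-months), recent projects
calibration = [(10, 25), (20, 45), (30, 65), (40, 85)]
model = fit_linear([x for x, _ in calibration],
                   [y for _, y in calibration])
print(mmre(model, [(25, 55), (35, 80)]))  # 0.03125
```

Evaluating on projects withheld from calibration, rather than on the calibration data itself, is what guards against an over-optimistic view of the model's accuracy.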
Model implementation includes both interpretation and refinement of models: the calibrated models are applied to the process, their outcomes are interpreted and evaluated in the context of the process/project, and the models are then refined where appropriate. [Fen98: c6; Pfl01: c3,c11,c12; Pre04: c22; Som05: c25]
Measurement techniques may be used to analyze software engineering processes and to identify strengths and weaknesses. This can be performed to initiate process implementation and change, or to evaluate the consequences of process implementation and change.
The quality of measurement results, such as accuracy, repeatability, and reproducibility, are issues in the measurement of software engineering processes, since there are both instrument-based and judgmental measurements, as, for example, when assessors assign scores to a particular process. A discussion and method for achieving quality of measurement are presented in [Gol99].
Process measurement techniques have been classified into two general types: analytic and benchmarking. The two types of techniques can be used together since they are based on different types of information. (Car91)
The analytical techniques are characterized as relying on "quantitative evidence to determine where improvements are needed and whether an improvement initiative has been successful." The analytical type is exemplified by the Quality Improvement Paradigm (QIP) consisting of a cycle of understanding, assessing, and packaging [SEL96]. The techniques presented next are intended as other examples of analytical techniques, and reflect what is done in practice. [Fen98; Mus99], (Lyu96; Wei93; Zel98) Whether or not a specific organization uses all these techniques will depend, at least partially, on its maturity.
The second type of technique, benchmarking, "depends on identifying an 'excellent' organization in a field and documenting its practices and tools." Benchmarking assumes that if a less-proficient organization adopts the practices of the excellent organization, it will also become excellent. Benchmarking involves assessing the maturity of an organization or the capability of its processes. It is exemplified by the software process assessment work. A general introductory overview of process assessments and their application is provided in (Zah98).