Guest Editors' Introduction: Software Tools Assessment
SEPTEMBER 1996 (Vol. 13, No. 5) pp. 23-26
0740-7459/96/$31.00 © 1996 IEEE

Published by the IEEE Computer Society
Krishna Kavi, University of Texas at Arlington

Ez Nahouraii, Independent Consultant

Developing quality software "just in time" has been one of the most significant challenges of the 1990s. Software tools and new software methodologies can together play a key role in achieving a higher level of software quality and productivity. Software quality can be greatly improved by selecting an appropriate development tool to assist in each phase of the development process, from requirements analysis to final testing and integration. Selecting an inappropriate tool, on the other hand, can actually hinder software development.

Most software projects use a range of tools that are typically assembled over a long period of time by different team members. The tools may have been modified to fit the project at hand, which often makes them difficult to reuse in other projects and to upgrade to newer versions.
The decision to adopt a new or more advanced tool for a given project is often based on an intuitive understanding of its expected benefits. The insertion of a new tool usually incurs substantial costs, including
  • acquiring the tool,
  • training project members to use the tool,
  • converting current designs and data from one format to another,
  • re-entering some data from existing designs into new designs, and
  • writing ad hoc scripts and filters for conversions between different tools or formats.
Thus, a project manager is understandably reluctant to consider a new tool or tool suite unless its benefits can be evaluated and presented in a convincing manner. All tools, whether old or new, can support software development. However, the effect of a new tool on the overall productivity or quality of the resulting software is difficult to measure. The difficulty of assessing tools is compounded by the varied nature of the application systems for which they are used. For example, tools such as DARTS (Distributed Architecture for Real-Time Systems [1]) are very valuable in the development of real-time and safety-critical systems. But such tools may have far less effect on the productivity or quality of software development for non-safety-critical applications.
Key Criteria
What we need, then, is an increased awareness of how useful available tools are and how they can be applied to different applications and the different phases of a software project. We must develop simple, usable, and yet comprehensive criteria for the evaluation of software tools. Case studies and experience from users and experts must be presented to as wide an audience as possible. In pursuit of that goal, this special issue attempts to provide
  • information on the most up-to-date and innovative software development tools and methodologies available today,
  • criteria for qualitative and quantitative evaluation of tools and methodologies, and
  • case studies on the application of the evaluation criteria.
In assessing the quality of a software tool, you could start with standards such as ISO 9126 [2]. However, as Alan Brown argues, evaluating a specific CASE tool is different from evaluating a CASE environment provided by a suite of tools [3]. For example, the "integration" of the component tools, in terms of their cohesiveness, interface compatibility, and shared conceptual notions, is a meaningful criterion only when multiple tools are involved.
We go one step further and state that it is not feasible to design a single set of criteria that can be used to evaluate all software tools and environments. The current approach taken by the software community is to provide evaluation guidelines whereby different evaluation criteria may be developed to suit different software projects. For example, the ISO 9126 standard proposes the following characteristics to assess a software product's quality:
  • Functionality. Does the software product provide the functions required to satisfy particular needs?
  • Usability. How much effort is needed to use the software?
  • Reliability. How capable is the software product of maintaining its level of performance?
  • Efficiency. What resources are needed to maintain the required level of performance?
  • Maintainability. What effort is needed to make specific changes to the product?
  • Portability. What effort is needed to transfer the product from one operating environment to another?
These characteristics can be used as guidelines to develop more specific evaluation criteria for software tools.
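To make the step from characteristics to criteria concrete, the sketch below (in Python) shows one simple way a project might turn the ISO 9126 characteristics into a weighted score card for comparing candidate tools. The weights and ratings are purely hypothetical placeholders, not values drawn from the standard or from any study in this issue; a real evaluation would derive them from the project's own priorities and measurements.

    # Illustrative sketch only: the weights and ratings below are hypothetical.
    # A project would assign its own weights to the ISO 9126 characteristics
    # and rate each candidate tool against them.
    ISO_9126_WEIGHTS = {
        "functionality": 0.25,
        "usability": 0.20,
        "reliability": 0.20,
        "efficiency": 0.10,
        "maintainability": 0.15,
        "portability": 0.10,
    }

    def weighted_score(ratings):
        """Combine per-characteristic ratings (0-10) into one weighted score."""
        return sum(ISO_9126_WEIGHTS[c] * ratings[c] for c in ISO_9126_WEIGHTS)

    # Hypothetical ratings for two candidate tools.
    tool_a = {"functionality": 8, "usability": 6, "reliability": 7,
              "efficiency": 5, "maintainability": 7, "portability": 4}
    tool_b = {"functionality": 6, "usability": 8, "reliability": 8,
              "efficiency": 7, "maintainability": 6, "portability": 7}

    for name, ratings in (("Tool A", tool_a), ("Tool B", tool_b)):
        print("%s: %.2f" % (name, weighted_score(ratings)))

Even such a simple score card forces the evaluators to state their priorities explicitly, which is often as valuable as the final numbers.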
Applying Criteria
It is not enough, however, simply to design evaluation criteria; they must also be applied. A metric is useful only if it lets you compare different tools, either qualitatively or quantitatively, on a specific characteristic. Applying an evaluation metric to a tool may itself require other tools. Alternatively, customers can be surveyed to evaluate the impact of software tools in their work environments. For example, one study evaluated 17 different CASE tools used by the German software industry against the ISO 9000 standard [4]. The evaluation was based on a carefully conducted survey of the industry in which customers were asked to name a best tool and a worst tool for each of several well-chosen criteria.
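A minimal sketch of that tallying approach follows; the criteria, tool names, and responses are invented for illustration and do not reproduce the data of the German study.

    # Illustrative sketch only: hypothetical survey responses in which each
    # customer names a best and a worst tool for a given criterion.
    from collections import Counter

    responses = [
        ("usability",   "ToolX", "ToolY"),
        ("usability",   "ToolX", "ToolZ"),
        ("reliability", "ToolZ", "ToolY"),
        ("reliability", "ToolX", "ToolY"),
    ]

    best, worst = Counter(), Counter()
    for criterion, best_tool, worst_tool in responses:
        best[(criterion, best_tool)] += 1
        worst[(criterion, worst_tool)] += 1

    # Report how often each tool was named best or worst for each criterion.
    for criterion in sorted({c for c, _, _ in responses}):
        print(criterion + ":")
        for (c, tool), votes in sorted(best.items()):
            if c == criterion:
                print("  best:  %s (%d)" % (tool, votes))
        for (c, tool), votes in sorted(worst.items()):
            if c == criterion:
                print("  worst: %s (%d)" % (tool, votes))

The resulting tallies give only a qualitative ranking, but they are cheap to collect and easy for a project manager to interpret.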
The impact of a tool on the quality or productivity of a project depends not only on the characteristics of the other tools used but also on the characteristics of the project itself. Elsewhere in this issue, Tillman Bruckhaus presents some interesting case studies showing the impact of project size on overall productivity. His study also shows that productivity is affected by the development process used. For example, the use of a sophisticated tool in a project that relies on a simple development process may actually decrease productivity. This is intuitively appealing, since a development process that does not need sophisticated tools can be hampered by the steep learning curve such tools require.
The evaluation of tools and methodologies for their effect on developing quality software and improving the productivity of the development process is finally receiving the attention it deserves. This is reflected in recent standards and in the experience reports and surveys appearing in conference proceedings and journals.
However, more research into the careful crafting of guidelines for assessing software tools is required. The resulting criteria should be applied to a wider range of tools and application environments, and the results of such evaluations should be made available to a wider audience. The advent of the Internet and World Wide Web should facilitate rapid dissemination of tool assessment results. We hope that the designers and users of software tools will share their evaluations and experiences with the software community. ✦
We thank Al Davis and the anonymous referees who helped with the review process, without whose help this special issue would not have been possible.

REFERENCES

Krishna Kavi is a professor of computer science and engineering at the University of Texas at Arlington. From 1993 to 1995, he was a program director for Operating Systems and Compilers programs at the National Science Foundation. Kavi edited IEEE CS Press Tutorials on Real-Time Systems and Scheduling and Load Balancing. He has been active in the areas of formal specification and verification of concurrent software systems, software tools for parallel processing, and models and tools for performance and reliability of concurrent processing systems using dataflow graphs and Petri nets. His other research interests include dataflow and multithreaded computer systems, concurrent object-oriented languages, and software reliability. Kavi is on the editorial board of IEEE Transactions on Computers. He was an IEEE CS Distinguished Visitor from 1989 to 1992 and an editor of the IEEE CS Press from 1988 to 1992. He also served on the program committees of several conferences, including the International Conference on Computer Architecture, the International Conference on Distributed Computer Systems, the Symposium on Parallel and Distributed Processing, and the Symposium on Assessment of Quality Software Development Tools. Kavi received a BE in electrical engineering from the Indian Institute of Science and an MS and PhD in computer science from Southern Methodist University. He is a senior member of the IEEE and a member of ACM.
Ez Nahouraii is an independent education consultant. During the past 31 years he was with IBM, serving as a Senior Instructor at the IBM System Research Education Center in New York; as a staff member in the Computer Science Department of IBM T.J. Watson Research, Yorktown Heights, New York; and as a member of the Advanced Technology Group in San Jose, California. He also lectured on behalf of IBM at various universities around the world. He has been an adjunct professor at UCLA and Santa Clara University. Nahouraii has authored or co-authored several technical publications and technical disclosures. He edited five IBM Proceedings of the Productivity Tools Symposium. He has served as editor-in-chief of the IEEE Computer Society Press, as general and program chair of the Symposium on Assessment of Software Tools, and on the Educational Activity Board. Nahouraii received an Outstanding Contribution Award from the IEEE Computer Society.