Guest Editors' Introduction: Software Tools Assessment

Krishna Kavi, University of Texas at Arlington
Ez Nahouraii, Independent Education Consultant

Pages: 23-26

Abstract—Developing quality software "just in time" has been one of the most significant challenges of the 1990s. Software tools and new software methodologies can together play a key role in achieving a higher level of software quality and productivity. Software quality can be greatly improved by selecting a correct development tool to assist in each phase of the development process, from requirements analysis to final testing and integration. Selecting an inappropriate tool, on the other hand, can actually hinder software development.


Most software projects use a range of tools that are typically assembled over a long period of time by different team members. The tools may have been modified to fit the project at hand, which often makes them difficult to reuse in other projects and to upgrade to newer versions.

The decision to adopt a new or more advanced tool for a given project is often based on an intuitive understanding of its expected benefits. The insertion of a new tool usually incurs substantial costs, including

  • acquiring the tool,
  • training project members to use the tool,
  • converting current designs and data from one format to another,
  • re-entering some data from existing designs into new designs, and
  • writing ad hoc scripts and filters for conversions between different tools or formats.

Thus, a project manager is understandably reluctant to consider a new tool or tool suite unless its benefits can be evaluated and presented convincingly. Any tool, old or new, can support software development, but the effect of a new tool on the overall productivity or quality of the resulting software is difficult to measure. The difficulty of assessing tools is compounded by the varied nature of the application systems for which they are used. For example, tools such as DARTS (Distributed Architecture for Real-Time Systems)1 are very valuable in the development of real-time and safety-critical systems, but they may have far less effect on the productivity or quality of software development for non-safety-critical applications.

KEY CRITERIA

What we need, then, is an increased awareness of how useful available tools are and how they can be applied to different applications and the different phases of a software project. We must develop simple, usable, and yet comprehensive criteria for the evaluation of software tools. Case studies and experience from users and experts must be presented to as wide an audience as possible. In pursuit of that goal, this special issue attempts to provide

  • information on the most up-to-date and innovative software development tools and methodologies available today,
  • criteria for qualitative and quantitative evaluation of tools and methodologies, and
  • case studies on the application of the evaluation criteria.

In assessing the quality of a software tool, you could start with standards such as ISO 9126.2 However, as Alan Brown argues, evaluating a specific CASE tool is different from evaluating a CASE environment provided by a suite of tools.3 For example, the "integration" of the component tools (their cohesiveness, interface compatibility, and shared conceptual notions) is relevant only when multiple tools are involved.

We go one step further and state that it is not feasible to design a single set of criteria that can be used to evaluate all software tools and environments. The current approach taken by the software community is to provide evaluation guidelines whereby different evaluation criteria may be developed to suit different software projects. For example, the ISO 9126 standard proposes the following characteristics to assess a software product's quality:

  • Functionality. Does the software product exhibit required functions to satisfy particular needs?
  • Usability. How much effort is needed to use the software?
  • Reliability. How capable is the software product of maintaining its level of performance?
  • Efficiency. What resources are needed to maintain the required level of performance?
  • Maintainability. What effort is needed to make specific changes to the product?
  • Portability. What effort is needed to transfer the product from one operating environment to another?

These characteristics can be used as guidelines to develop more specific evaluation criteria for software tools.
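
To illustrate (purely as a hypothetical sketch, not something drawn from the articles in this issue), a team might turn these characteristics into a weighted scoring matrix for candidate tools. In the Python sketch below, the criterion weights, tool names, and scores are all invented placeholders; in practice they would come from the project's own priorities and measurements.

    # Hypothetical example: weighted scoring of candidate tools against
    # criteria derived from the ISO 9126 quality characteristics.
    # All weights, tool names, and scores are illustrative assumptions.

    CRITERIA_WEIGHTS = {
        "functionality": 0.25,
        "usability": 0.20,
        "reliability": 0.20,
        "efficiency": 0.10,
        "maintainability": 0.15,
        "portability": 0.10,
    }

    # Each candidate tool is scored 1 (poor) to 5 (excellent) per criterion.
    TOOL_SCORES = {
        "Tool A": {"functionality": 4, "usability": 3, "reliability": 5,
                   "efficiency": 3, "maintainability": 4, "portability": 2},
        "Tool B": {"functionality": 3, "usability": 5, "reliability": 4,
                   "efficiency": 4, "maintainability": 3, "portability": 4},
    }

    def weighted_score(scores):
        """Combine per-criterion scores into a single weighted total."""
        return sum(CRITERIA_WEIGHTS[c] * scores[c] for c in CRITERIA_WEIGHTS)

    if __name__ == "__main__":
        for tool, scores in sorted(TOOL_SCORES.items()):
            print(f"{tool}: {weighted_score(scores):.2f}")

The point is not the particular numbers but that the ISO 9126 characteristics become operational only once a project assigns them weights and an explicit scoring procedure.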

APPLYING CRITERIA

It is not enough, however, simply to design evaluation criteria; they must also be applied. A metric is useful only if it can be used to compare different tools, either qualitatively or quantitatively, on a specific characteristic. Applying an evaluation metric to a tool may itself require other tools. Alternatively, customers can be surveyed to evaluate the impact of software tools in their work environments. For example, one study evaluated 17 different CASE tools used by the German software industry against the ISO 9000 standard.4 That evaluation was based on a carefully conducted survey in which customers were asked to name the best and the worst tool for each of several well-chosen criteria.
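
As a purely illustrative sketch of such a survey approach (the data, tool names, and tallying scheme below are invented and are not taken from the cited study), best/worst nominations can be reduced to a net score per tool for each criterion:

    # Hypothetical tally of survey responses in which each respondent names
    # the best and the worst tool for each criterion (loosely in the spirit
    # of the survey-based evaluation described above; all data is invented).
    from collections import defaultdict

    # (criterion, tool named best, tool named worst), one tuple per response
    responses = [
        ("usability",   "Tool A", "Tool C"),
        ("usability",   "Tool A", "Tool B"),
        ("reliability", "Tool B", "Tool C"),
        ("reliability", "Tool A", "Tool C"),
    ]

    # criterion -> tool -> net votes (+1 per "best", -1 per "worst")
    net = defaultdict(lambda: defaultdict(int))
    for criterion, best, worst in responses:
        net[criterion][best] += 1
        net[criterion][worst] -= 1

    for criterion, tools in net.items():
        ranking = sorted(tools.items(), key=lambda kv: kv[1], reverse=True)
        print(criterion, ranking)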

The impact of a tool on the quality or productivity of a project depends not only on the characteristics of the other tools used but also on the characteristics of the project itself. Elsewhere in this issue, Tilmann Bruckhaus presents some interesting case studies showing the impact of project size on overall productivity. His study also shows that productivity is affected by the development process used. For example, using a sophisticated tool in a project that relies on a simple development process may actually decrease productivity. This is intuitively plausible, since a development process that does not need sophisticated tools can be hampered by the steep learning curve they require.

The evaluation of tools and methodologies for their effect on developing quality software and improving the productivity of the development process is finally receiving the attention it deserves. This is reflected in recent standards and in the experience reports and surveys appearing in conference proceedings and journals.

However, more research into the careful crafting of guidelines for assessing software tools is needed. The resulting criteria should be applied to a wider range of tools and application environments, and the results of those evaluations should be made available to a wider audience. The advent of the Internet and the World Wide Web should facilitate rapid dissemination of tool assessment results. We hope that the designers and users of software tools will share their evaluations and experiences with the software community. ✦

The Impact of Tools on Software Productivity, pp. 29-37

Tilmann Bruckhaus, Nazim H. Madhavji, Ingrid Janssen, and John Henshaw

To stay competitive, software organizations must continuously improve product quality and customer satisfaction, as well as lower software development costs and shorten delivery time. One way to do this is to adopt appropriate software tools. However, to make effective use of specific tools you should first understand how a tool will affect these critical variables in your project.

Because we don't know how to analyze a tool's impact on specific projects, tools are generally adopted based on an intuitive understanding of their expected impact. In many cases, the actual results of this practice are disappointing. The problem is aggravated because tool adoption often brings considerable costs.

The authors did a case study on the impact of tool insertion in ongoing software projects. The result of their study was a method that organizations can use to assess the impact of tool insertion on software productivity.

A Framework for Evaluating Software Technology, pp. 39-49

Alan W. Brown and Kurt C. Wallnau

Software development organizations continually make decisions about how to select, apply, and introduce software technology. Companies decide on some technologies explicitly, after examining alternatives in detail; other technologies are selected with little study of the decision's potential impact. In both cases, the organization attempts to understand and balance competing concerns regarding the new technology. These concerns include acquisition costs, the technology's effect on quality and time to market, and the training and support services it will require.

Attempts to calculate the return on investment of software technology have generally failed. The foremost reason is the difficulty in establishing cause and effect when assessing new software technologies' impact on an organization. The authors' experimental framework can help companies evaluate a new software technology by examining its features in relation to its peers and competitors through a systematic approach that includes modeling and experiments.

Performance Testing a Large Finance Application, pp. 50-53

David Grossman, M. Catherine McCabe, Christopher Staton, Bret Bailey, Ophir Frieder, and David Roberts

Developers have been performance testing large database applications for more than 20 years. Up to now, they have focused primarily on system-level testing of operating systems and database management systems. However, just because a machine, its operating system, and its database system are fast does not mean that an application using the database system will perform well. Design flaws in the application or in the tuning parameters specific to the application's environment often result in serious performance problems.

The authors present a case study showing how a simple prototype can be used to verify, before a system goes into production, that its performance will fall within acceptable bounds under realistic conditions.

Emerald: Software Metrics and Models on the Desktop, pp. 55-59

John P. Hudepohl, Stephen F. Aud, Taghi M. Khoshgoftaar, Edward B. Allen, and Jean Mayrand

As software becomes more and more sophisticated, industry has begun to place a premium on software reliability. Consequently, software reliability is a strategic business weapon in an increasingly competitive marketplace. In response to these concerns, BNR, Nortel, and Bell Canada recently developed Emerald, a decision support system designed to improve telecommunications software reliability. Emerald efficiently integrates software measurements, quality models, and delivery of results to the desktop of software developers.

Emerald not only improves software reliability, but also facilitates the accurate correction of field problems. The authors' experiences developing Emerald taught them valuable lessons about the implementation and adoption of this type of software tool, which they present here.

Tools Faire: Untangling the Web with Web and Client/Server Development Tools, pp. 61-72

Alan Chmura and David Sharon

Migrating legacy systems and developing new systems for client/server environments has dominated the software development tool market in the '90s. In the last two years, the Internet's surging popularity has led many established tool developers to provide Web functionality for their legacy migration and client/server tools. The authors provide a sampling of the many new and upgraded tools available to help you create client/server and Internet applications, along with tips for picking those best suited to your needs.

ACKNOWLEDGMENTS

We thank Al Davis and the anonymous referees for their help with the review process; without them, this special issue would not have been possible.

REFERENCES



About the Authors

Krishna Kavi is a professor of computer science and engineering at the University of Texas at Arlington. From 1993 to 1995, he was a program director for Operating Systems and Compilers programs at the National Science Foundation. Kavi edited IEEE CS Press Tutorials on Real-Time Systems and Scheduling and Load Balancing. He has been active in the areas of formal specification and verification of concurrent software systems, software tools for parallel processing, and models and tools for performance and reliability of concurrent processing systems using dataflow graphs and Petri nets. His other research interests include dataflow and multithreaded computer systems, concurrent object-oriented languages, and software reliability. Kavi is on the editorial board of IEEE Transactions on Computers. He was an IEEE CS Distinguished Visitor from 1989 to 1992 and an editor of the IEEE CS Press from 1988 to 1992. He also served on the program committees of several conferences, including the International Conference on Computer Architecture, the International Conference on Distributed Computer Systems, the Symposium on Parallel and Distributed Processing, and the Symposium on Assessment of Quality Software Development Tools.
Kavi received a BE in electrical engineering from the Indian Institute of Science and an MS and PhD in Computer Science from Southern Methodist University. He is a senior member of the IEEE and a member of ACM.
Ez Nahouraii is an independent education consultant. During the past 31 years he was with IBM, serving as a Senior Instructor at the IBM System Research Education Center in New York; as a staff member in the Computer Science Department of IBM T.J. Watson Research, Yorktown Heights, New York; and as a member of the Advanced Technology Group in San Jose, California. He also lectured on behalf of IBM at various universities around the world. He has been an adjunct professor at UCLA and Santa Clara University. Nahouraii has authored or co-authored several technical publications and technical disclosures. He edited five IBM Proceedings of the Productivity Tools Symposium. He has served as editor-in-chief of the IEEE Computer Society Press, as general and program chair of the Symposium on Assessment of Software Tools, and on the Educational Activity Board.
Nahouraii received an Outstanding Contribution Award from the IEEE Computer Society.