Guest Editor's Introduction

 

By Maurizio Morisio


Software professionals know how hard it is to manage a project successfully, and how easy it is to overrun budgets and deadlines or fail to meet customer expectations. In the late '80s and early '90s, the answer to these problems was, "Do more": more process, more documentation. This approach has never been popular among developers, who are more interested in writing code. And the usual problems kept appearing anyway.


Since the late '90s, the approach advocated by the agile movement has been, "Do less," or perhaps, "Do smarter": fewer documents and more automation. Smarter requirements, such as direct interaction with customers and test cases, instead of requirements documents. Automated test cases instead of test documents. Smarter, continuous quality assurance through pair programming and test-driven development. Smarter design, with simple designs, refactoring, and continuous integration instead of design documents and integration plans.
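To make the "test cases instead of documents" idea concrete, here is a minimal sketch of test-driven development in Python. The function name and the parsing rule are invented for illustration; in TDD, the tests below would be written first, fail, and then drive the implementation while doubling as an executable specification.

```python
def parse_amount(text):
    """Parse a currency string such as '$1,250.50' into a float."""
    return float(text.replace("$", "").replace(",", ""))

# These tests capture the requirement; written before the code exists,
# they replace a prose requirements document with something runnable.
def test_parse_plain_amount():
    assert parse_amount("$100") == 100.0

def test_parse_amount_with_thousands_separator():
    assert parse_amount("$1,250.50") == 1250.50

if __name__ == "__main__":
    test_parse_plain_amount()
    test_parse_amount_with_thousands_separator()
    print("all requirements pass")
```

Because the tests are automated, they keep verifying the requirement on every change, which is what makes continuous integration practical without separate test documents.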


I've gathered several articles that show evidence of how the agile approach works, what its results are, and how practitioners can use it. Two of these articles, from IEEE Software magazine, will be available for free on Computing Now throughout June. "Tests and Requirements, Requirements and Tests: A Möbius Strip," by Robert C. Martin and Grigori Melnik, is an introduction to FIT tables, a technique to define test cases as requirements (or requirements as test cases). "Agile Requirements Engineering Practices: An Empirical Study," by Lan Cao and Balasubramaniam Ramesh, provides an analysis of agile requirements engineering approaches, all based on an iterative discovery approach and intense communication between developers and customers. A third free article, from IT Professional, gives an overview of agile software development.


I've also provided links to several other agile-themed articles from IEEE Software and Computer. This figure provides a guide to reading these articles.


Maurizio Morisio is an associate professor in the Department of Automation and Computer Science, Politecnico di Torino. He's IEEE Software's associate editor in chief for online initiatives. Contact him at maurizio dot morisio at polito dot it.



Theme — AGILE METHODS

   

Tests and Requirements, Requirements and Tests: A Möbius Strip

By Robert C. Martin and Grigori Melnik
From the January/February 2008 issue of IEEE Software

Surprisingly, to some people, one of the most effective ways of testing requirements is with test cases very much like those for testing the completed system. — Donald C. Gause and Gerald M. Weinberg

When Donald Gause and Gerald Weinberg wrote this statement, they were asserting that writing tests is an effective way to check requirements' completeness and accuracy. They also suggested writing these tests while gathering, analyzing, and verifying requirements—long before those requirements are coded.
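The FIT idea the article introduces can be sketched as a table of inputs and expected outputs that a customer can read and the team can execute. The discount rule and function name below are invented for illustration, not taken from the article; real FIT tables live in HTML or wiki pages and are run by a fixture framework.

```python
def discount(order_total):
    """Hypothetical business rule: orders of $100 or more get a 5% discount."""
    return 0.05 if order_total >= 100 else 0.0

# FIT-style decision table: each row is a requirement stated as an example.
#   (order_total, expected_discount)
fit_table = [
    (50.00, 0.0),
    (99.99, 0.0),
    (100.00, 0.05),
    (250.00, 0.05),
]

if __name__ == "__main__":
    for order_total, expected in fit_table:
        actual = discount(order_total)
        status = "pass" if actual == expected else "FAIL"
        print(f"{order_total:>7.2f} -> {actual:.2f} (expected {expected:.2f}) {status}")
```

The table is the requirement and the test at once: customers review the rows, and the build runs them, closing the loop between requirements and tests that the article's Möbius-strip metaphor describes.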


Agile Requirements Engineering Practices: An Empirical Study

By Lan Cao and Balasubramaniam Ramesh
From the January/February 2008 issue of IEEE Software

The rapidly changing business environment in which most organizations operate is challenging traditional requirements-engineering (RE) approaches. Software development organizations often must deal with requirements that tend to evolve quickly and become obsolete even before project completion. Rapid changes in competitive threats, stakeholder preferences, development technology, and time-to-market pressures make prespecified requirements inappropriate.


Agile Software Development: Ad Hoc Practices or Sound Principles?

By Lan Cao and Balasubramaniam Ramesh
From the March/April 2007 issue of IT Professional

Agile software development challenges traditional software development approaches. Rapidly changing environments characterized by evolving requirements and tight schedules require software developers to take an agile approach. Most software development organizations must deal with quickly evolving requirements that tend to become obsolete even before project completion. Constant changes in competitive threats, stakeholder preferences, and software development technology, as well as time-to-market pressures, severely challenge system development using traditional plan-based approaches. A group of agile methods that seeks to address this changing context has gained a lot of attention ("Get Ready for Agile Methods, with Care," B. Boehm, Computer, vol. 35, no. 1, 2002, pp. 64–69). These methods include practices such as short iterations, frequent releases, simple and emerging design, peer review, and on-site customer participation.


 


What's New

   

Overview of Debug Standardization Activities

By Bart Vermeulen, Rolf Kühnis, Jeff Rearick, Neal Stollon, and Gary Swoboda
From the May/June 2008 issue of IEEE Design & Test of Computers

Over the past decade, the semiconductor industry has seen a significant increase in the complexity of embedded systems. Multiple heterogeneous processor cores are now on a single die, together with hard-wired accelerators and dedicated or general-purpose I/O peripherals. The code size of the software that runs on these embedded processors is also increasing dramatically. As a consequence, a new combination of embedded hardware and software often must be debugged. Because of the many different components that make up an embedded system, debugging has become a multidisciplinary problem, requiring detailed knowledge of system hardware, embedded software, and debug equipment and tools. Each of these components requires its own expertise, making it vital that all components fit together seamlessly.


Multicore Resource Management

By Kyle J. Nesbit, James E. Smith, Miquel Moreto, Francisco J. Cazorla, Alex Ramirez, and Mateo Valero
From the May–June 2008 issue of IEEE Micro

Continuing the long-term trend of increasing integration, the number of cores per chip is projected to increase with each successive technology generation. These chips yield increasingly powerful systems with reduced cost and improved efficiency. At the same time, general-purpose computing is moving off desktops and onto diverse devices such as cell phones, digital entertainment centers, and data-center servers. These computers must have the key features of today's general-purpose systems (high performance and programmability) while satisfying increasingly diverse and stringent cost, power, and real-time performance constraints.


Using VR for Human Development in Africa

By Dave Lockwood and Erik Kruger
From the May/June 2008 issue of IEEE Computer Graphics and Applications

Most VR practitioners probably don't think of Africa as being at the forefront of technology innovation. Some might even regard Africa as an exotic holiday destination; to others, it's a continent characterized by poverty and hunger, to be avoided. Within this context, what role, if any, can VR play in human development in Africa?


Hand-Gesture Computing for the Hearing and Speech Impaired

By Gaurav Pradhan, Balakrishnan Prabhakaran, and Chuanjun Li
From the April–June 2008 issue of IEEE MultiMedia

Humans communicate with each other using speech, hand gestures, facial expressions, and sign language. It seems natural to use the same types of interaction when communicating with computers. This becomes all the more important for people with hearing and speech impairments. For example, several applications in education and entertainment—such as interactive learning modules, lecture presentations, and 3D games—have speech input-and-output capabilities. Speech- and hearing-impaired users require real-time sign language gesture recognition and generation to use these applications.


The PlayStation 3 for High-Performance Scientific Computing

By Jakub Kurzak, Alfredo Buttari, Piotr Luszczek, and Jack Dongarra
From the May/June 2008 issue of Computing in Science & Engineering

The heart of the Sony PlayStation 3—the Cell processor—wasn't originally intended for scientific number crunching, just as the PlayStation 3 itself wasn't meant primarily to serve such purposes. Yet, both these items could impact the high-performance computing world in significant ways. This introductory article takes a closer look at their potential to do so; an extended version of it is published as a University of Tennessee technical report.


Multilanguage Programming

By Steve Vinoski
From the May/June 2008 issue of IEEE Internet Computing

Have you ever worked on an integration project with a developer who possesses seemingly limitless knowledge, wisdom, and experience with the various systems and techniques required? We've all heard of — and might even know — individuals from various professions who are considered to be exceptional at what they do. Whether they're athletes, actors, mechanics, or software developers who focus on integration, such people all possess extensive vocabularies. Here, the term "vocabulary" refers not to spoken or written words, but more abstractly to the tools, tactics, and techniques pertinent to each profession. Top basketball players know multiple ways to help their teams with scoring, passing, and defense, and they can adapt their games as needed. Virtuoso musicians are skilled at multiple instruments, styles, and techniques. Seasoned integration developers tend to be knowledgeable in a variety of technical areas, not only because they're exposed to many technologies over time but also because they often face project pressures of getting disparate systems to work together.


From the Office Document Format Battlefield

By Jirka Kosek
From the May/June 2008 issue of IT Professional

If you regularly use an office application suite, you might have noticed changes over the past few years in the default formats for saving documents. From the user perspective, this isn't all that important—at least until you have problems opening a document from another system—but those small changes to the file extensions of office documents actually represent substantial changes to the underlying document representations. These changes are the result of a big movement in the IT industry. Indeed, it's a movement that could have a very interesting impact on how we handle digital content.

Recent versions of office suites are using new XML-based formats. OpenOffice.org and GoogleDocs, for example, use the Open Document Format (ODF) and Microsoft Office 2007 uses Office Open XML (OOXML). Both formats have now been accepted as international standards; this article outlines the history of the process that has left us with two functionally similar, but not fully compatible document formats.


AI's 10 to Watch

By James Hendler, Philipp Cimiano, Dmitri Dolgov, Anat Levin, Brian Milch, Peter Mika, Louis-Philippe Morency, Boris Motik, Jennifer Neville, Erik Sudderth, and Luis Von Ahn
From the May/June 2008 issue of IEEE Intelligent Systems

The field of computer science is struggling to define its core in a ubiquitous-computing world where pervasive computer use and end-user programming give us cause to question some basic assumptions of our paradigms. In AI, however, our intellectual challenges remain largely unchanged. The quest to understand intelligence, whether by cognitive emulation or engineering fiat, remains one of the true frontiers open to modern science. Despite increasingly powerful computers, robots' growing capabilities, and the development of new and better methods for combining information, intelligence itself continues to elude us. The need remains constant for creative leaders who will take us into new AI paradigms or find ways to innovate in those we're already exploring.


Camera Phones: A Snapshot of Research and Applications

By Franklin Reynolds
From the April–June 2008 issue of IEEE Pervasive Computing

Once considered science fiction, mobile phones with digital cameras are now inexpensive, widely available, and very popular. Significantly more camera phones are sold each year than dedicated digital cameras. According to Strategy Analytics, it's likely that over one billion camera phones were sold last year (see http://ce.tekrati.com/research/9039/). Surprisingly, the tremendous popularity of camera phones has caused little controversy. A few companies and institutions have banned camera phones owing to security concerns, and there has been some abuse of personal privacy, but for the most part, there has been worldwide acceptance of this new technology.


Digital Rights Management and Individualized Pricing

By Michael Lesk
From the May/June 2008 issue of IEEE Security & Privacy

Why do online music stores sell to every customer at the same price? Scholars Andrew Odlyzko and Wendy Seltzer have both commented on the possibility that digital rights management could enable price discrimination—selling to different customers at different prices. Yet, several years later, it still hasn't happened. Why not?


Know Your Audience

By David Alan Grier
From the May 2008 issue of Computer

I rarely see these things coming. I am usually so focused on the moment that the first sign slips past me, like an energetic puppy dashing out the screen door. They usually occur at a reception of some sort that is meant to promote the fortunes of the institution that employs me. We are in a large room. Music is playing in the background. I am standing in a small circle of people, shaking hands, offering greetings, and saying how pleased I am that so many people have come this evening.

At some point, someone will lean into the circle and announce in a fairly pleasant way that they have a story to tell. As one who always appreciates a good story, I turn my attention to the speaker, forgetting that this is the first step down that road paved with good intentions.


Ultra Low-Cost PCs Redraw the OS Wars

By Greg Goth
From the June 2008 issue of IEEE Distributed Systems Online

The Single-Era Conjecture. That's the name San Jose State University professor and technology author Randall Stross has given Microsoft's thus-far thwarted attempts to buy Yahoo and the less-than-enthusiastic reception greeting its Vista operating system. In a recent New York Times column (http://www.nytimes.com/2008/05/18/technology/18digi.html?_r=1&oref=slogin), Stross explains the Single-Era Conjecture as "the invisible law that makes it impossible for a company in the computer business to enjoy pre-eminence that spans two technological eras."