IEEE Software, vol. 20, no. 3, May/June 2003, pp. 5-7
Published by the IEEE Computer Society
I recently participated in an email discussion with several senior software development experts. The discussion naturally turned to the use of software engineering best practices. One comment in particular struck me (paraphrased here):
All these … have strong followings to this day. … What worries me is "to this day"—just how long in the tooth are we getting here?
Many of the best practices we follow today were originally proposed decades ago. For instance, Winston Royce first described the famous Waterfall lifecycle model in 1970. 1 Michael Fagan first described his inspection method—arguably the most effective of all peer review methods—in the open literature in 1976. 2 The insights and techniques that the leaders of the day contributed were welcome revelations. However, how well have these contributions withstood the last three decades? Is classical software engineering as we tend to think of it obsolete? Have these techniques run their course?
SOFTWARE'S WONDER YEARS
To put things in perspective, consider the software industry in the early and mid-1970s when much of what we consider classical software engineering was developed. It differs so much from contemporary software development as to be almost unrecognizable.
Developers generally wrote applications for large, centralized systems, almost always funded by someone with deep pockets such as the military or large corporations. Systems were usually built to achieve automation—that is, to computerize some task that humans had been doing. So, the developers often could specify the expected behavior in advance (naturally this doesn't mean they always did) by simply observing the manual processes. I don't mean to imply that innovative applications weren't developed during this period nor that automation didn't have its own unique challenges. But most organizations bought into computing in the 1960s and 1970s to make processes previously done by people cheaper and faster.
If you had asked the average software developer about user interfaces in 1970, he or she would have pointed you to an IBM 029 keypunch machine. Point and click indeed! In fact, the most common software system architecture was the centralized, batch-oriented automation project. The system "user" was typically the person who read the reports it generated, and issues such as navigation and usability were seldom even considered. Programmers were furnished with "output sheets" made up of a grid with 55 lines and 132 columns—the number of lines and columns on a typical sheet of tractor-fed computer paper. This is how an application was "prototyped" for the user.
Programming environments?
Computer cycles were very expensive. In fact, one of my graduate school professors, John Metzner, suggested in 1980 that the most valuable computing invention that could be made would be a "CPU battery," where programmers could save up all the cycles that were otherwise wasted when the computer was idle. Although time-sharing systems were certainly available by 1970, they were by no means widely accepted as the de facto programming environment. A 1968 study reported (surprise!), "Statistically significant results indicated faster debugging under online conditions … ," yet it also listed several contemporary citations that rejected the use of online programming because it diverted resources from "productive data processing" to "unproductive uses." 3
These early "programming environments" were not particularly agile, especially if combined with the use of punch cards rather than online text editors. To conserve computer resources, programmers were taught to write their programs out on coding forms prior to entering the code into the editor or having it keypunched. If a simple coding error turned up after the program had been keypunched, inputting the line again and finding which card to replace in the deck were tedious and time-consuming activities. So, programmers usually did desk checking—pouring over the coding forms before keypunching to look for syntax and logic errors. Sometimes it would help to have an office mate look over the coding forms as well, a very basic form of peer review.
Captive consumers
There was no mass market consumer software as we know it today, because there was no consumer mass market. Most users were captives to the application because they needed to use it to perform their jobs, yet they had little say in its acquisition. I well remember my early courses in systems analysis and design, which carefully distinguished "customer" and "user."
Although a fair amount of hobbyist software was produced in the late 1970s, it was, well, hobbyist software. Probably the first true mass-market consumer software didn't appear until Software Arts released VisiCalc in 1979. IBM didn't introduce the PC until 1981, and it wasn't until more than a decade later, in 1992, that Microsoft released Windows 3.1 (the first version that allowed multitasking). In 1986, Borland shipped my all-time favorite development environment, Turbo Pascal 3 (compiler, integrated editor, and debugger), on a single 360-Kbyte (yes, K!) floppy disk. (You can still download Turbo Pascal 3 from Borland's Antique Software page at http://community.borland.com/article/0,1410,20792,00.html. The file size is 170,209 bytes.) I used this environment to develop SET Laboratories' first commercial product, PC-Metric for Pascal, which we released (complete with three-ring binder and a 5 1/4-inch floppy disk in a sandwich bag) in 1987.
With no mass market to contend with in the early days, it was relatively easy to anticipate the needs of the limited user base most applications had.
FAST FORWARD TO TODAY
Software development—both the market and the way we do it—has changed dramatically over the past 35 years. Do the lessons we learned during the 1970s and 1980s still apply?
Some members of our community say no. For instance, some of the principles behind the Agile Manifesto reject the concepts of the classical Waterfall lifecycle model. These principles embrace incremental delivery of working software and encourage changing requirements, anathema to the Waterfall model. Lightweight, informal reviews by programmer pairs replace the use of formal inspections.
The agile viewpoint stems from the myriad dramatic changes we have observed in the industry over the last few decades. Development environments are more agile, and delivery mechanisms nowadays often consist of Web-based downloads. Are the days of phased life cycles and formal inspections over?
LITTERING AND SOFTWARE ENGINEERING
A community can have three types of knowledge, according to C.W. Choo: 4
- Implicit knowledge represents individual or group experience and expertise and is difficult to share.
- Explicit knowledge is based on formal policies, procedures, instructions, and standards.
- Cultural knowledge is the basis for what we think is fair and trustworthy.
We can imagine a process in which the implicit knowledge of a few influential individuals is formalized and becomes explicit knowledge. After a certain period of consistent promulgation, this explicit knowledge becomes cultural knowledge.
A good example of this process involves rural roadside littering in the United States. Up until the 1960s, it was common for drivers to litter—it was simply considered an acceptable practice. I can recall road trips with my parents in the early 1960s in which cigarette butts, candy wrappers, and empty drink cans were nonchalantly tossed out the windows of passing cars.
Some influential people, such as US President Lyndon Johnson's wife, Lady Bird, had implicit knowledge that this was a disagreeable and offensive practice. This led to explicit knowledge, expressed in the form of a 1965 US federal law called the Highway Beautification Act. Within a couple of decades, this explicit knowledge evolved into cultural knowledge. Today, most Americans wouldn't think of throwing an empty drink can from their car, and it's hard to imagine society ever coming to view littering as acceptable again.
Many classical software engineering practices that were first converted from implicit knowledge to explicit knowledge in the 1970s have evolved into cultural knowledge. Today, most developers would reject the notion that you can develop a quality software application without progressing through a sequential, phased life cycle. New methods that seem to challenge this culturally accepted practice have transitioned from implicit knowledge to explicit knowledge, but only time will tell if their evolution will continue on to cultural knowledge. Until then, however, I don't think I'd write off classical software engineering techniques. They're cultural.
I'd like to hear from IEEE Software readers. Is classical software engineering over the hill? Please write to me at warren.harrison@computer.org.

References
