A new book on software engineering has unearthed an old term, “software crisis.” The author makes a strong case that our programmers are not able to create the software that we need. There have “not been similar advances in relation to our software development capability” as there have been in our hardware, the author claims. Thus, software has become “the critical limiting factor in realizing the potential of the advances” in hardware that we have made over the past decade (B. Fitzgerald, “Software Crisis 2.0,” Computer, vol. 45, no. 4, 2012, pp. 89-91).
Yet, if we look carefully at the idea of a software crisis, we quickly discover that the software industry has felt it has been in difficult circumstances since its inception. The fact that we again have people claiming that we are in a crisis state suggests that not only do we have more to do to improve software development but also that we are not really seeing the industry properly.
The idea of a software crisis can be traced back almost to the start of the computing era. During the first years of the computing age, roughly 1952 to 1958, few people understood the nature of programming and fewer still anticipated the role of software in the computing industry. “Most people would have been surprised to learn that software would eventually account for as much as half the cost of the computing system,” reported German computer scientist F.L. Bauer (“Software and Software Engineering,” SIAM Review, vol. 15, no. 2, part 2, 1973, pp. 469-480). No one foresaw that software would become the dominant technology in computation.
The experience in the US and Europe was repeated in China. When the Chinese Academy of Sciences (CAS) wrote its first plan for computing, the 1956-1967 Program for Developing National Sciences and Technology, it placed much more emphasis on building computing machines than on programming those machines. It argued that China should have the goal of educating 500 to 600 students in computer manufacture each year and only 10 to 25 “professional people” a year in “programming and computer technology” (Liaison Bureau of CAS, “CAS Attending the Soviet Conference of Magnetism Electrophoresis Physics and Organizing a Group to Investigate Computing Technology,” no. 2, The Archives of CAS, 56-4-17, Dec. 1956). However, like the computer manufacturers in the West, China also learned that it needed to educate and substantially expand the number of programmers and software developers.
Software Crisis Past
Most commonly, we associate the idea of a software crisis with the period from 1965 to 1975. During this time, the number of computers expanded rapidly and forced companies to rethink how they produced software. Through 1965, software had been the customer’s responsibility. A company would purchase a computer with only minimal software. It might have a crude operating system, an assembler, and perhaps a compiler for Cobol or Fortran or one of the other still-young computer languages. With those tools, customers would develop their own software.
At the time, the business community could easily accept the idea that each customer would be unique. Each company had unique business practices and these practices were reflected in the software. Furthermore, the computing environment was much more diverse than it is today. Each computer model was unique and could not run programs that had been written for other machines. Few vendors sold more than a few thousand of any computer. The most popular computer of this period, the IBM 1401, sold only 12,000 units.
The computing environment changed radically in 1964 with the introduction of the so-called third generation of computers. These machines came with mature operating systems and were simpler to operate than their predecessors. Of these machines, the IBM 360 family was the most influential. It came in five models that could run the same code. The smallest could be used by a family business. The largest could support a big government office or large corporation.
The third-generation computers created a programming crisis because they radically expanded the market for computers and expanded the demand for programmers. The “‘gap in programming support’ that emerged in the 1950s,” wrote historian Nathan Ensmenger, “by the end of the decade was being referred to as a ‘full blown software crisis’” (The Computer Boys Take Over: Computers, Programmers, and the Politics of Technical Expertise, MIT Press, 2010). Software didn’t work. It wasn’t completed on schedule. It cost far more than anticipated.
The computing literature of the time raised a great cry to do something about the crisis. It argued that we needed to increase the number and quality of programmers. It suggested that we needed to educate them better, recognize their importance, and give them better pay. The industrialized countries did indeed try some of these ideas. In fact, the foundation for software engineering came directly from this period. However, the real solution to the software crisis came not from better education or more technical knowledge, though both helped. Instead, it came from an approach that no one really expected: the creation of a software industry.
A software industry allowed the development costs to be spread across many customers. It would mean that a single piece of software could generate multiple streams of revenue and have enough profit to hire good programmers and design the system well. It didn’t solve all the problems of the software crisis, but it reduced most of them to an acceptable level of inconvenience. By 1980, most people accepted software as a product and believed that commercial software was better than anything that they could produce.
Software Crisis Present
If we accept that we are in a software crisis, then this crisis is somewhat different from the one in the 1960s. Unlike in the 1960s, we have a mature software industry. We have a strong educational system for developing programmers. We have a mature field of software engineering. However, we also have a much different customer base, which might be the important distinction. This customer base, according to one commentator, comprises “digital natives” who seek “to take advantage of the opportunities afforded by advances in processing power and increased availability of data.” They were born into a world with technology and demand much from it.
But this group of consumers not only demands much but can also offer much. They are comfortable with technology and know, innately, the basic concepts of data, programming, and programming logic. They have a fearlessness that was unknown to earlier generations. I have witnessed this among my students. They will confront difficult projects without any concern that they do not yet understand the tools that they need. In particular, I can recall a student who tackled a problem that involved extracting a dataset of several million observations from a larger collection. She posted queries on the Internet, looked at code repositories, and step by step learned what she needed to do. In less than a week, she had the data she needed. She was capable of more than we might have assumed because she was a digital native.
Most of the discussion about any current crisis in software follows fairly predictable paths. I see calls for better education programs, more compensation for programmers, and more computer classes for students 10 to 16 years of age. However, the real solution will probably come from looking at software in a new light and perhaps from better understanding that the customer base for software—the digital natives—is part of the software industry. Long ago, we started engaging these customers in the work of debugging and supporting software. For three decades, the open source movement has used these digital natives to create software.
In our age, we are being asked how to engage not only those who are inclined to help work on software but also those who are not. We are at the point of recognizing that software is not merely a product that moves from a central supplier to a customer base but is actually a tool that organizes a community, a community that can be used to develop and sustain the software. Just as the creation of the software industry resolved an earlier software crisis four decades ago, we are looking for a simple idea that changes the nature of our software environment. The solution is probably a simple idea that we have already tried but don’t quite understand. That is the nature of crises. They teach us something new about a world that we thought we knew.
About David Alan Grier
David Alan Grier is a writer and scholar on computing technologies and was President of the IEEE Computer Society in 2013. He writes for Computer magazine, and you can find videos of his writings at video.dagrier.net. He has served as editor in chief of IEEE Annals of the History of Computing, as chair of the Magazine Operations Committee, and as an editorial board member of Computer. Grier formerly wrote the monthly column “The Known World.” He is an associate professor of science and technology policy at George Washington University in Washington, DC, with a particular interest in policy regarding digital technology and professional societies. He can be reached at firstname.lastname@example.org.