David Alan Grier is a writer and scholar on computing technologies and was President of the IEEE Computer Society in 2013. He writes for Computer magazine. You can find videos of his writings at video.dagrier.net. He has served as editor in chief of IEEE Annals of the History of Computing, as chair of the Magazine Operations Committee and as an editorial board member of Computer. Grier formerly wrote the monthly column The Known World. He is an associate professor of science and technology policy at George Washington University in Washington, DC, with a particular interest in policy regarding digital technology and professional societies. He can be reached at email@example.com.
All Things New
For the past six weeks, I have been adjusting to the life of a former president of the Computer Society. At some level, the transition has been easy and welcome. Sometime in early January, people simply stopped sending me email. I no longer opened my mailbox to find society members asking for help with their subscription to Computer, the bill for their society dues, or the conference that won't accept their paper.
Yet, as I move through this period, I find that I cannot shed some of the business of the society as easily as I thought. Some of the projects that I started last year are not over. A negotiation still needs to be finished. There are responsibilities that I still have to discharge. These experiences have caused me to reflect on the bigger issues that the Computer Society faces when it needs to put aside a set of activities that have outlived their usefulness and start something new. This is a problem that all professional societies face: as we try to advance the field, we discover that we can't always drop the old activities.
For example, during my service in the IEEE's publications division, I became aware that a few of IEEE's transactions were no longer publishing new ideas. Workers in those fields had solved most of the major problems, so the transactions were publishing minor results, solutions to problems of little consequence.
Now one would think that IEEE should recognize that some of the old journals were no longer relevant and stop publishing them. However, this is a harder problem than it might appear. The lives and identities of individual researchers are deeply connected to these journals. They fight to keep these journals in publication because these periodicals are their link to the technical community. If the transactions stopped publishing, they would feel as if they had no place in the world of research.
Now, to be fair, the editorial boards of these journals argue that their periodical is still relevant, still has a place in IEEE. To some extent, they are correct. Long after a journal has ceased to publish major results, it still has an important place in training new faculty. Research can improve the quality of faculty, even if the results of the work are not that important. By working through research problems, faculty learn the details of their field and are better able to transmit their ideas to students.
Yet if a professional society has to keep publishing old journals, how can it adapt to new ideas? This is a truly difficult question that we have been trying to solve. Increasingly, we've been trying to do it by making all of our activities respond to market forces. Publications have to cover their costs. Conferences have to earn enough money to pay their expenses.
We have also been looking for new ways to bring researchers together and to get them thinking about new problems. For example, we recognized a need to bring a group of computer architects and software engineers together to think about the future of computing now that the industry is no longer able to follow Moore's Law.
Moore's Law, which was really the collective goal of the semiconductor industry, set a target of doubling the number of transistors on a processor chip every 18 months. This target had tremendous power over the computer industry, especially the software industry. It meant that software firms needed to design their software not for the machines available when the code was being written but for the faster machines that would be available 18 months later. Indeed, many software products performed badly when they were first released, encouraging business managers to upgrade their computers when they bought new software.
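The power of that 18-month target comes from simple compounding. A minimal sketch, using illustrative starting figures rather than anything from the industry itself:

```python
# Back-of-the-envelope projection under the 18-month doubling target
# described above. The starting count and time span are illustrative
# assumptions, not historical figures.

def projected_count(initial, months, doubling_period=18):
    """Project a chip's transistor count after `months`,
    assuming one doubling every `doubling_period` months."""
    return initial * 2 ** (months / doubling_period)

# A chip with 1 million transistors, projected 6 years (72 months)
# out: 72 / 18 = 4 doublings, so 16 times the original count.
print(projected_count(1_000_000, 72))  # 16000000.0
```

Four doublings in six years is why software written for next year's machine, rather than this year's, was a rational bet.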
However, the industry stopped meeting the targets of Moore's Law sometime in the middle of the first decade of the 21st century. Many observers identify the release of one version of the Pentium 4 chip, called the Prescott release, as the end of Moore's Law. The chip generated far more heat than the engineers anticipated, and so it needed to operate at a slower clock speed.
No longer able to rely on the targets of Moore's Law to increase the speed of their machines, the computer industry has been looking for other ways to increase the speed of computing. The most common approach has been to build multiprocessor machines, machines based on multicore chips. Rather than using just a single processor, these machines are able to send code to multiple processors. While this approach does increase the speed of processing without generating excess heat, it does have disadvantages. The operating system for such an architecture requires more overhead. Not all programs can make use of multiple processors. Putting more processors on a chip reduces the amount of memory available to each processor.
To see how computer architecture needs to change, the Computer Society has created a Special Technical Community, a small organization designed to work quickly, identify the needs of a new field, and vanish when the work is done. Its members started work last year and held a major meeting to talk about the future of computer architecture and its impact upon software. We are calling the effort Rebooting Computing.
As it has before, the computing industry finds itself at a point where it needs to protect a large investment in software and yet needs to develop a new computer architecture that may not be compatible with that software. One of the times when the computer industry seriously considered alternative architectures in order to improve speed was the early 1980s, when the mainframe and the minicomputer were the dominant machines. It is interesting to note that many of the people working on Rebooting Computing are the students of the people who worked on the new architectures of the 1980s. At least one member of the team was actually involved in some of those earlier discussions.
I don't have a clear sense of the kinds of ideas that our Rebooting Computing group will develop. The new architectures of the 1980s were the vector machines, the Crays, the Fujitsus, the Hitachis. These machines had a brief period of dominance before succumbing to Moore's Law. People realized that it was easier to adopt a simple computer architecture and let the semiconductor industry create faster and smaller processors. The work created a number of useful technologies, including the smart compilers that were able to take advantage of novel architectures. However, the computers developed in this period have generally departed from common use.
The Rebooting Computing group may also exist for only a short time or, perhaps, it may create a journal, start a conference, and find a long-term position in the Computer Society. Then, in 30 or so years, we may see our students, or the students of our students, look at the work of this group and wonder how they can stop it so that they can start something new.