It is tough to do anything with other people during the summer. My academic colleagues spend much of the warm months doing things that they cannot get done during the academic year. They are visiting other labs, going to conferences, or merely trying to draft a paper and refusing to answer emails. At the end of August, when I was trying to assemble a delegation to meet with a visiting group of computer scientists from the Beijing University of Posts and Telecommunications, it was impossible to find a time when we could all meet. I had to settle for a simpler solution: having a few of them meet the group one at a time.
The problem of getting a group of busy professionals to find a single time to meet is hard enough, but it also seems to reflect the problems that we are trying to address with our new forms of parallel machines. I have been following this effort, which focuses on the use of Graphics Processing Units, or GPUs, to do work in parallel.
We are investigating general applications for these processors because we have encountered a difficult physical barrier in our efforts to scale computers ever smaller. Sometime in 2004, we began to see signs that we could no longer shrink the standard microprocessor architecture and increase the clock speed without creating difficult operational problems. The chips would run too hot to cool with cost-effective methods. So rather than continue down that path, we looked for ways of doing computing in parallel.
I became interested in the new GPUs for two reasons: one reflects my personal history; the other looks to the future of the field of software development. I first learned about the new GPUs when a patent attorney called me for some advice. He worked for a chip designer that was claiming that the new GPUs were using an idea that it had developed and patented in the 1990s. Was not the GPU, the attorney asked, merely an updated version of the Illiac IV and the Burroughs Scientific Processor, machines on which I had worked during the first years of my career?
As I was not actively working on computer architecture at the time, I had to do some quick research. I found that the GPUs were indeed quite similar to those machines from my youth. Those two machines, the Illiac IV and the Burroughs Scientific Processor, were Single Instruction Multiple Data stream, or SIMD, processors. The modern GPUs are Single Instruction Multiple Thread, or SIMT, processors.
The differences between the two approaches are relatively straightforward. The SIMT elements are complete processors with complete states, while the SIMD processors were simpler devices. However, the impact of the two architectures is quite similar.
Both SIMD and GPU SIMT devices require programmers to think about algorithms in ways that are substantially different from the methods that we use to program serial machines. While I was learning to program SIMD machines, I attended many a seminar in which the instructor pushed and prodded us to “Think Parallel.”
As I came to understand how to program these devices, I concluded that it was not particularly difficult to think in parallel. In fact, some problems naturally fit the parallel model. Yet, I soon discovered that parallel machines are quite challenging to program efficiently. They can run very fast if they can simply march through memory without making any decisions or jumping to other parts of the dataset. However, if they have to make a jump and restart a section of code, they can become frustratingly slow.
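A minimal CUDA sketch, of my own devising, illustrates the point. The two hypothetical kernels below do comparable work on an array: the first marches straight through memory with every thread doing the same thing, while the second takes a data-dependent branch, which forces threads in the same warp to execute both paths one after the other and slows the whole group down.

```cuda
#include <cstdio>
#include <cuda_runtime.h>

// Uniform kernel: every thread performs the same operation on its own
// element, so the hardware can march straight through memory.
__global__ void scale(float *data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        data[i] *= factor;
    }
}

// Divergent kernel: threads in the same warp take different branches
// depending on their data, so the warp must execute both paths serially.
__global__ void scaleDivergent(float *data, int n, float factor) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n) {
        if (data[i] > 0.5f) {             // data-dependent decision
            data[i] *= factor;            // some threads take this path...
        } else {
            data[i] = data[i] * data[i];  // ...while their neighbors take this one
        }
    }
}

int main() {
    const int n = 1 << 20;
    float *d = nullptr;
    cudaMalloc(&d, n * sizeof(float));
    cudaMemset(d, 0, n * sizeof(float));

    int threads = 256;
    int blocks = (n + threads - 1) / threads;
    scale<<<blocks, threads>>>(d, n, 2.0f);
    scaleDivergent<<<blocks, threads>>>(d, n, 2.0f);
    cudaDeviceSynchronize();

    cudaFree(d);
    return 0;
}
```

The sketch is simplified, but it captures the old SIMD lesson in its modern SIMT form: the machine rewards code that lets every element step forward together and punishes code that asks them to go their separate ways.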
The challenge of writing efficient parallel code leads to my second reason for being interested in the new GPUs: the problem of making new computational methods available to a large group of workers. Time and again, we have developed new ideas that allow computers to solve useful but difficult problems. To make those methods available to a large class of workers, we have had to hide the complexities behind software tools and simplifying concepts. While these tools make the new methods widely available, they also change the nature of the global programming community. They appeal to people with certain sets of skills while making some old skills obsolete.
For example, in the early 1960s, the first programming languages made computers accessible to people who had little or no knowledge of the underlying digital electronics. In a short period of time, they allowed the world’s community of programmers to grow quickly. At the same time, they eliminated many of the early programmers, who indeed understood digital circuits.
More recently, the tools for Internet programming, including the various stack solutions, languages such as Ruby and Python, and the architecture of APIs, have also expanded the pool of people who can use the network in a sophisticated way. As a longtime observer of computation, I find some of these tools awkward and inelegant. At the same time, I have to acknowledge that they have brought a new class of programmers into computing. This class knows little about network programming, but its members understand, deeply and perhaps intuitively, how distributed systems should work.
We still have a long way to go before we make the parallelism of GPUs and SIMT processors available to a large population of programmers. We made a certain amount of progress towards developing tools for parallel programming in the 1980s. We are in the process of reinventing those tools in light of the advances we have seen since that time. We are also adding new ideas that should make these processors available to a wider class of applications.
I’m not quite in a position to say how these tools will be organized or what kinds of programmers they will attract to the process. All I can say is that this is a situation we have seen before. We have a new technology with new promise, even though it may be based on ideas that we have seen in the past. We will make it successful only if we can make it applicable to a wide class of problems and only if we can make it useful to a new group of programmers.
These programmers will likely take computing in a new direction, for it seems likely that they will naturally think in parallel in situations where the rest of us would plod forward one step at a time.
About David Alan Grier
David Alan Grier is a writer and scholar on computing technologies and was President of the IEEE Computer Society in 2013. He writes for Computer magazine. You can find videos of his writings at video.dagrier.net. He has served as editor in chief of IEEE Annals of the History of Computing, as chair of the Magazine Operations Committee and as an editorial board member of Computer. Grier formerly wrote the monthly column “The Known World.” He is an associate professor of science and technology policy at George Washington University in Washington, DC, with a particular interest in policy regarding digital technology and professional societies. He can be reached at grier@computer.org.