High-Performance Computing: Global and Local
By David Alan Grier

Long before we had high-performance computing, we had high-performance computing centers. In spring 1904—almost half a century before the advent of the electronic computer—astronomer Simon Newcomb proposed a “Center for the Exact Sciences.” This center would do a variety of things to support scientific research, but one of its central roles was to provide computing services to scientists. As he wrote, such a center should support “the development of mathematical methods and their application to a great mass of existing observations.”

A full century after Newcomb, high-performance computing centers have become common. They are often centerpieces of computing, occupy gleaming new buildings, and display their computing machines with pride. But in some ways, the concept of high-performance computing is fairly new. The term came into common use during the 1980s when the IEEE and the ACM jointly founded a conference on the subject. Before the 1980s, we had built fast computers, but we hadn’t always considered them to be a special subdivision of computing technology. So, if we really want to understand the nature of high-performance computing, we need to start by understanding the origin of the computing center, for these centers have defined the nature of high-performance computing and how it interacts with the world.

A high-performance computing center has three basic divisions. First, it has a unit that can do calculations. Second, it has a programming division that can prepare calculations for the computing unit. Finally, it has a management division that runs the entire organization, raises funds, and chooses the calculations that will be handled by the computing division.

Traditionally, we have identified these centers with their first division, the computing division. We measure how many calculations that division can do, expressed as the number of floating-point operations it can complete in a single second, and we rank the computing centers by that speed. Yet the other two divisions are equally important, because they maintain a body of knowledge about computing and determine how the public perceives high-performance computing.


We can appreciate why we need to understand all the elements of a computing center by looking at one of the earliest such centers, Vannevar Bush’s Center of Analysis. The Center of Analysis was part of the Massachusetts Institute of Technology. It provided computing services to the electrical industry, the electronics industry, and the US military. It helped electrical utilities design large-scale transmission networks and helped electronic component manufacturers model vacuum tubes. During the Second World War, it did ballistics calculations for the US Army. In fact, it was the most powerful computing center in operation during the war and did many of the calculations that we usually associate with the ENIAC, a machine that did not become operational until after the war had ended.

The Center of Analysis vanished during the 1950s. Most commentators claim that it closed because it failed to embrace the new digital computing technology. In fact, such a claim is shortsighted. The Center of Analysis did rely on an analog computing machine, the Differential Analyzer, but it ceased operations long before digital computation became common. On closer examination, the center failed not only because it clung to analog technology but also because its other two divisions, the programming division and the management division, failed to adapt to the demands of the 1950s.

The programming division of the Center of Analysis consisted primarily of students in MIT’s departments of electrical and mechanical engineering. In all, this group produced nearly 50 papers that utilized the Differential Analyzer. To them, the Differential Analyzer was a machine that could model certain kinds of differential equations. Hence, they developed the skill of taking abstract mathematical calculations and mapping them onto machines, onto the motions of gears, disks, and shafts. They did nothing to organize or develop the field of numerical analysis, the field that became central to scientific work during the 1950s. As a result, when universities started acquiring digital computers, they had no need to turn to the Center of Analysis for assistance with numerical problems.

Equally, the management division of the Center of Analysis quickly fell out of step with the times. Initially, it raised funds from charitable groups such as the Rockefeller Foundation. It also described its organization as an institution that served industry. However, by the late 1940s, both of these ideas were out of date. The primary source of funds for computation was the US government, and the principal clients for computing services were military organizations: the Army, the Navy, and the Atomic Energy Commission.

A quick look at the Computing Center of the University of Illinois shows how these centers developed during the 1950s, 1960s, and 1970s. During this period, the centers focused on high-speed electronics, developed programming skills in numerical analysis, and connected their work to national defense. The University of Illinois built four high-speed computers between 1950 and 1975: the ILLIAC I (1952), the ILLIAC II (1958), the ILLIAC III (1966), and the ILLIAC IV (1970). All were supported by the military or the Atomic Energy Commission. All were applied to military problems. The programming division of the school first developed skill in numerical analysis. However, beginning with the ILLIAC III, which performed some operations in parallel to increase its speed, it started developing skill in parallel programming. The ILLIAC IV expanded the concept of parallelism and explored many ideas that are now common on massively parallel high-performance computers. Indeed, the school became a center for the study of parallelism.

The field of high-performance computing shifted in the mid-1980s. In 1984, the US Congress created four high-performance computing centers. Unlike earlier computing projects, such as those at the University of Illinois, these centers were not intended to be solely for military use. Congress required that each of these centers provide services to local researchers and local industry. Furthermore, each would have to raise its own operating funds.

The four centers, chosen through a competitive process, were located in Ithaca, New York; Pittsburgh, Pennsylvania; Champaign-Urbana, Illinois; and San Diego, California. They began operation in 1985 and started to build the identity that has marked high-performance computing to this day. These centers had the three basic elements of a high-performance computing organization. First, they had a computing division with a high-performance, massively parallel computer, a machine that could not be purchased by a single company or university. The return on investment was simply too low for any private organization to justify the expense.

Second, to support the computing division, they would have a planning office that developed expertise in parallelism and other programming techniques for high-performance machines. They would not have to worry about general programming or numerical analysis skills, because those skills had become common. However, they had to develop and teach the skills of using the machine efficiently, as few people would know them.

Finally, all these centers had management divisions that were concerned with providing services not only to government offices but also to local businesses and researchers. This office represented the most innovative change in high-performance computing during this time. High-performance computing centers had to turn their attention to local organizations. They had to find organizations that could use their services and help finance operations. They could no longer simply rely on the military or national organizations for funds.

Shortly after these four centers began operations, the IEEE and ACM created a high-performance computing conference called “SC,” for Supercomputing Conference. The conference supports the high-performance computing industry and maintains the current identity of high-performance computing, an identity that is based on a complex and expensive high-performance machine; a planning unit that has expertise in parallelism and the techniques for programming high-performance machines; and a management office that looks to global concerns, national security issues, and local industry needs.

In many ways, our current sense of high-performance computing grew out of the competition between the US and Japan. In the early 1980s, each country was trying to build the world’s fastest computers. The 1983 book The Fifth Generation, by Stanford professor Ed Feigenbaum, described the state of the competition and suggested that Japan had the stronger program. However, instead of creating a permanent competition between the two countries, the rivalry shifted the nature of high-performance computing from the global to the local, from supporting military and national security projects to the work of sustaining local research and industry. That is the lesson that we learn from studying supercomputer centers.

 



About David Alan Grier

David Alan Grier is a writer and scholar on computing technologies and was President of the IEEE Computer Society in 2013. He writes for Computer magazine. You can find videos of his writings at video.dagrier.net. He has served as editor in chief of IEEE Annals of the History of Computing, as chair of the Magazine Operations Committee and as an editorial board member of Computer. Grier formerly wrote the monthly column “The Known World.” He is an associate professor of science and technology policy at George Washington University in Washington, DC, with a particular interest in policy regarding digital technology and professional societies. He can be reached at grier@computer.org.