The race toward the fastest computer has become more global than ever, as the list of the world’s top 500 supercomputers shows. Only two years ago, the top three supercomputers belonged to the US Department of Energy (DoE). The list is updated every six months; over the following year and a half, Chinese computers rose to 2nd and then 1st place. In the most recent list, a Japanese supercomputer came in first, leaving only Jaguar at Oak Ridge in the top five (ranked third). In addition to performance, sustainability (power efficiency) is getting much more attention, and a “green list” is also maintained.
Aside from global competitiveness, the real drivers behind the race are applications that require more powerful hardware to address some key technical problems. Examples include combustion (see one of our feature papers below), extreme materials, and nuclear power. Both the US DARPA Ubiquitous High Performance Computing and the forthcoming US DoE Exascale Computing funding opportunities will help drive next-generation supercomputers. And, down the road, these achievements will also transfer to commodity computers. US national agencies have targeted exascale (10^18 flops) as the next big step, and they expect this to be achieved by 2020.
Although amassing enough computers to reach this level of computational power is already possible, doing so within a limited power budget and with sufficient reliability is the key challenge. To put things in perspective, today’s top computer peaks at 8.8 Pflops while burning slightly less than 10 megawatts (MW) of power. Simply scaling it to exascale would take us over 1 gigawatt, whereas the DoE’s target is closer to 20 MW (PDF). Similar reasoning applies to reliability.
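The arithmetic behind that gap can be sketched in a few lines. The peak and power figures come from the text above; the final efficiency-gain figure is my own back-of-envelope extrapolation, not a DoE number:

```python
# Back-of-envelope scaling of today's top system (figures from the text).
peak_pflops = 8.8        # today's top machine, peak Pflops
power_mw = 10.0          # approximate power draw, MW

exaflop_pflops = 1000.0  # 1 exaflop = 1000 Pflops
scale = exaflop_pflops / peak_pflops          # how many times bigger
naive_power_mw = scale * power_mw             # power if we just scale up
required_gain = naive_power_mw / 20.0         # vs. the ~20 MW DoE target

print(f"Scale factor: {scale:.1f}x")
print(f"Naive exascale power: {naive_power_mw:.0f} MW (~{naive_power_mw/1000:.1f} GW)")
print(f"Required efficiency gain: ~{required_gain:.0f}x")
```

In other words, reaching exascale within the target budget requires improving flops-per-watt by well over an order of magnitude, which is why power efficiency dominates the hardware discussion.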
What kind of challenges does exascale computing bring to us computer scientists, and where can the IEEE Computer Society help? We’ll have to revisit some fundamental assumptions, including
- memory, including non-volatile,
- CPU designs,
- power and cooling,
- co-design of systems and applications, and so on.
This month’s theme addresses some of these topics. In “The Reliability Wall for Exascale Supercomputing” (login required for full text), Xuejun Yang and colleagues highlight the challenge of achieving scalable performance in the presence of faults. They quantify the effect of reliability on the scalability of peta- and exascale systems by introducing the concepts of reliability speedup and “costup.” Finally, they show how to mitigate reliability-wall effects in system design, in both hardware and software.
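The intuition behind a reliability wall can be illustrated with a simple checkpoint/restart model. This is a hypothetical sketch, not the authors’ formulation: it assumes system MTBF shrinks as 1/n with node count n, applies Young’s first-order approximation for time lost to checkpointing, and uses invented figures for node MTBF and checkpoint cost:

```python
import math

def effective_speedup(n, node_mtbf_h=43_800.0, ckpt_h=0.1):
    """Ideal speedup n, discounted by checkpoint/restart overhead.

    With an optimally chosen checkpoint interval, Young's approximation
    puts the fraction of time lost at about sqrt(2 * C / M), where C is
    the checkpoint cost and M = node_mtbf / n is the system MTBF.
    All figures here are illustrative, not from the paper.
    """
    system_mtbf_h = node_mtbf_h / n
    overhead = math.sqrt(2.0 * ckpt_h / system_mtbf_h)
    return n * max(0.0, 1.0 - overhead)

for n in (10_000, 100_000, 1_000_000):
    print(f"{n:>9,} nodes -> effective speedup {effective_speedup(n):10,.0f}")
```

The point of the toy model is the trend: beyond some scale, adding nodes yields no further effective speedup, which is the kind of wall the authors quantify.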
In “HyperX: Topology, Routing, and Packaging of Efficient Large-Scale Networks” (login required for full text), Jung Ho Ahn and colleagues introduce HyperX, an extension of the hypercube and flattened butterfly topologies. It takes advantage of high-radix switch components that integrated photonics will make available. HyperX is a good candidate for exascale architectures because of its performance, packaging, and cost.
In “From Microprocessors to Nanostores: Rethinking Data-Centric Systems” (login required for full text), Partha Ranganathan describes an active memory and non-volatile random-access memory (NVRAM) approach to addressing large amounts of data. He also introduces data-centric workloads to model and benchmark contemporary and future applications. He strongly advocates reevaluating the implications of data-centric workloads for system architectures. By the exascale era, NVRAMs will be more widely deployed, and new ways of addressing them will become critical.
Mark Giampapa and colleagues present an operating system for supercomputers in “Experiences with a Lightweight Supercomputer Kernel: Lessons Learned from Blue Gene CNK” (login required for full text). In large-scale computing, the performance and reliability impacts of kernels on systems and applications are amplified significantly, hence the introduction of small, low-noise kernels, such as in IBM’s Blue Gene system. The authors demonstrate that such kernels can retain Linux compatibility without sacrificing low noise or reliability. This will be even more critical in exascale systems.
In “In Situ Visualization for Large Scale Combustion Simulations” (login required for full text), Hongfeng Yu and colleagues discuss application visualization requirements in exascale systems. The typical offline approach to visualization does not work for such huge amounts of data, so new approaches are needed that collect data during runs and that can be used either online or offline. Such approaches make it possible to capture and understand highly intermittent, transient phenomena in turbulent combustion.
There are numerous other resources available on this topic; see the Related Resources below for a few to start with.
Dejan Milojicic is the founding editor in chief of Computing Now and a senior research manager at HP Labs.
- S. Borkar, “Major Challenges to Achieve Exascale Performance” (PDF), presentation, Intel Corp., 2009.
- A.-M. Corley, “Imec, Intel See Software as Key to Exascale Computing,” IEEE Spectrum, 23 June 2010.
- G. Gibson, “Scaling Storage into the Exascale Era” (PDF), keynote presentation, 2010 IEEE Int’l Conf. Cluster Computing, 2010.
- R. Stevens and A. White, co-chairs, P. Beckman et al., “A Decadal DOE Plan for Providing Exascale Applications and Technologies for DOE Mission Needs” (PDF), presentation, US Department of Energy, 2010.
- R. Vuduc and K. Czechowski, “What GPU Computing Means for High-End Systems,” IEEE Micro, vol. 31, no. 4, 2011, pp. 74-78 (login required for full text).
- D.S. Stevenson and R.O. Conn, “Bridging the Interconnection Density Gap for Exascale Computation,” vol. 44, no. 1, 2011, pp. 49-57 (login required for full text).