Issue No. 12, Dec. 2012 (vol. 61)
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/TC.2012.132
Ardavan Pedram, The University of Texas at Austin
Robert A. van de Geijn, The University of Texas at Austin
Andreas Gerstlauer, The University of Texas at Austin
As technology reaches physical limits, reducing power consumption is a key issue on the path to sustained performance. In this paper, we study fundamental tradeoffs and limits in efficiency (measured in energy per operation) that can be achieved for an important class of kernels, namely the level-3 Basic Linear Algebra Subprograms (BLAS). It is well accepted that specialization is the key to efficiency. This paper establishes a baseline by studying GEneral Matrix-matrix Multiplication (GEMM) on a variety of custom and general-purpose CPU and GPU architectures. Our analysis shows that orders-of-magnitude improvements in efficiency are possible with relatively simple customizations and fine-tuning of memory hierarchy configurations. We argue that these customizations can be generalized to perform other representative linear algebra operations. In addition to exposing the sources of inefficiency in current CPUs and GPUs, our results show that our prototype Linear Algebra Processor (LAP) implementing double-precision GEMM (DGEMM) can achieve 600 GFLOPS while consuming less than 25 W in standard 45 nm technology, which is up to 50× more energy efficient than cutting-edge CPUs.
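As context for the GEMM baseline the abstract describes, the sketch below shows a minimal cache-blocked DGEMM (C := C + A·B) of the kind such studies analyze. This is purely illustrative and not the paper's LAP design: the function name `dgemm_blocked`, the row-major layout, and the block size `NB` are assumptions; in practice the blocking factors would be tuned to the memory hierarchy under evaluation.

```c
#include <stddef.h>

/* Block size is a placeholder; real implementations tune it to the
 * cache/local-store sizes of the target architecture. */
#define NB 64

/* Blocked matrix-matrix multiply: C := C + A*B for square n x n
 * matrices stored in row-major order. The three outer loops walk
 * NB x NB blocks so that each block of A, B, and C is reused while
 * it is resident in fast memory. */
static void dgemm_blocked(size_t n, const double *A, const double *B,
                          double *C)
{
    for (size_t ii = 0; ii < n; ii += NB)
        for (size_t kk = 0; kk < n; kk += NB)
            for (size_t jj = 0; jj < n; jj += NB) {
                size_t i_end = (ii + NB < n) ? ii + NB : n;
                size_t k_end = (kk + NB < n) ? kk + NB : n;
                size_t j_end = (jj + NB < n) ? jj + NB : n;
                for (size_t i = ii; i < i_end; ++i)
                    for (size_t k = kk; k < k_end; ++k) {
                        double a = A[i * n + k]; /* reuse across j */
                        for (size_t j = jj; j < j_end; ++j)
                            C[i * n + j] += a * B[k * n + j];
                    }
            }
}
```

The block sizes, loop order, and the mapping of blocks onto on-chip memories are exactly the kind of memory-hierarchy parameters whose tuning the paper identifies as a major source of efficiency gains.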
bandwidth, system-on-a-chip, linear algebra, algorithm design and analysis, field-programmable gate arrays, memory management, energy efficiency, energy management, low-power electronics, special-purpose hardware, low-power design, energy-aware systems, performance analysis and design aids, matrix multiplication, memory hierarchy, level-3 BLAS
A. Pedram, R. A. van de Geijn and A. Gerstlauer, "Codesign Tradeoffs for High-Performance, Low-Power Linear Algebra Architectures," in IEEE Transactions on Computers, vol. 61, no. 12, pp. 1724-1736, Dec. 2012.