Jack J. Dongarra

2003 Sidney Fernbach Award Recipient

"For outstanding and sustained contributions to the area of mathematical software, most particularly in the areas of communication and numerical libraries and performance benchmarks for high performance computing"

Jack Dongarra is a University Distinguished Professor of Computer Science in the Department of Computer Science, Director of the Center for Information Technology Research, and Director of the Innovative Computing Laboratory at the University of Tennessee. He is also a Distinguished R&D Staff Member at Oak Ridge National Laboratory and an Adjunct Professor in the Computer Science Department of Rice University. Prior to coming to the University of Tennessee, he was the Scientific Director of the Advanced Computing Research Facility at Argonne National Laboratory. Jack is a Fellow of the AAAS, ACM, and IEEE, and a member of the National Academy of Engineering.

Jack Dongarra has been a leader in the design and development of high performance mathematical software for the past 20 years, especially linear algebra libraries for high performance computing environments. His research on the implementation of linear algebra algorithms for high performance computing architectures has defined the field. His leadership in promoting standards for mathematical software has led to the development of the major software libraries most commonly used in HPC. Jack made significant contributions, often in a leadership role, to LINPACK, LAPACK, ScaLAPACK, the BLAS, and other libraries. By developing and supporting na-net and netlib, Jack had a tremendous impact in building an online community of numerical algorithms developers long before the arrival of the World Wide Web. His outstanding work in numerical and communication libraries, along with his other research efforts, has earned him membership in the National Academy of Engineering. He is among the top 10 most cited researchers in all of computer science and has won four R&D 100 awards for his technological innovations.

For two decades the libraries to which Jack contributed have represented the state of the art in algorithms and methods that take advantage of the underlying architecture to obtain near-optimal performance. Many supercomputer vendors, such as IBM, Cray, SGI, HP, Fujitsu, NEC, and Hitachi, have adopted these software packages as the basis of their own numerical libraries. This software involves the innovative use of memory hierarchies, parameters for performance tuning, novel numerical algorithms, and comprehensive error bounds, among other techniques, to achieve performance and portability. The libraries are grounded in fundamental research and are well engineered, tested, and documented. They represent the standard by which all other mathematical packages are currently measured.
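As a concrete illustration of how these libraries are consumed today (a hedged sketch: NumPy is used here only as a convenient front end and is not part of the original Fortran packages), NumPy's dense linear solver dispatches to LAPACK's LU-based gesv driver, which is itself built on top of the BLAS:

```python
import numpy as np

# np.linalg.solve delegates to LAPACK's *gesv driver routine
# (LU factorization with partial pivoting, then triangular solves),
# which in turn calls BLAS kernels tuned for the memory hierarchy.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])

x = np.linalg.solve(A, b)             # LAPACK does the heavy lifting
residual = np.linalg.norm(A @ x - b)  # should be at machine-precision level
print(x, residual)
```

The same layering (portable driver on top of architecture-tuned kernels) is what lets vendors substitute their own optimized BLAS underneath without changing user code.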

In addition to his work on numerical libraries, Dongarra has been a major driver in the creation of de facto standards (PVM and MPI) that have been widely accepted in computer and computational science. PVM provided the first serious infrastructure for virtualizing collections of computers and was the first modern system for distributed parallel computation; it has been adopted all over the world for technical and educational purposes. Emerging from the PVM experience, MPI has become the community standard for communication libraries for parallel computing and represents the state of the art. These efforts provide the basic building blocks critical for high performance and portability among high performance computers.

Dongarra has advanced high performance computing through his work in developing tools for performance measurement. In particular, PAPI (Performance Application Programming Interface) enables developers to tune their software for optimal performance. PAPI provides a reusable, portable, and functionality-oriented foundation for performance tool design and has been widely adopted by developers to help increase the efficiency and performance of their software. ATLAS (Automatically Tuned Linear Algebra Software) has transformed the way in which such kernel libraries are created and maintained. By replacing hand tuning with intelligent, self-adapting software, ATLAS promotes both portability and efficiency, producing code that can exploit the speed the underlying hardware is capable of delivering. Complementing his deep work in software tools, his publication of the LINPACK benchmark list and his later support of the TOP500 list have shaped the field of HPC performance evaluation and the study of market trends in HPC.
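The self-tuning idea behind ATLAS can be sketched in a few lines (illustrative only: this toy autotuner, the `blocked_matmul` and `autotune` helpers, and the candidate block sizes are assumptions for the sketch, not ATLAS code). The technique is to time several candidate kernel variants empirically, once per machine, and keep the fastest:

```python
import timeit
import numpy as np

# Toy ATLAS-style empirical tuning: try several block sizes for a
# blocked matrix multiply and select the one that runs fastest on
# the machine at hand, instead of hand-picking a value per platform.

def blocked_matmul(A, B, bs):
    """Blocked matrix multiply; bs is the cache-blocking factor."""
    n = A.shape[0]
    C = np.zeros((n, n))
    for i in range(0, n, bs):
        for k in range(0, n, bs):
            for j in range(0, n, bs):
                C[i:i+bs, j:j+bs] += A[i:i+bs, k:k+bs] @ B[k:k+bs, j:j+bs]
    return C

def autotune(n=128, candidates=(16, 32, 64)):
    """Time each candidate block size and return the fastest."""
    A = np.random.rand(n, n)
    B = np.random.rand(n, n)
    timings = {bs: timeit.timeit(lambda: blocked_matmul(A, B, bs), number=3)
               for bs in candidates}
    return min(timings, key=timings.get)

best = autotune()
print("selected block size:", best)
```

Real ATLAS explores a far richer search space (register blocking, loop orderings, generated kernels) at install time, but the install-time search-and-select loop above is the core of the approach.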
Computing Now