Applications with large computational requirements and data-intensive applications are rapidly evolving in many scientific domains. For this reason, parallel computing is gaining attention and is an area of intense study. Different types of parallel systems are available to users. Along with the traditional parallel-processor systems, various systems have recently appeared, such as workstation clusters and computational grids. They all attempt to provide efficient service for applications with particular demands. Therefore, the deployment and portability of efficient parallel algorithms are crucial.

*Introduction to Parallel Computing* deals not only with common parallel-processing problems but also with issues that have emerged in high-performance computing. The book also presents cases of algorithms that perform efficiently on traditional parallel computers as well as on more recent parallel architectures. The programming standards the book uses are the Message Passing Interface (MPI), POSIX (Portable Operating System Interface) threads, and OpenMP (open specifications for multiprocessing).

Over 13 chapters, the book presents parallel-computing topics such as parallel architectures, designing and analyzing parallel algorithms, and programming techniques. Chapter 1 introduces parallel computing, and the authors suggest that the remaining chapters fall into four sections: fundamentals, parallel programming, nonnumerical algorithms, and numerical algorithms.

The first section, fundamentals, contains Chapters 2 through 5. Chapter 2 describes parallel-programming platforms. The authors identify the architectural characteristics required for writing programs on different kinds of platforms. Chapter 3 describes the principles of designing and implementing efficient parallel algorithms. Chapter 4 presents various communication operations and expresses them in terms of their time complexity. Parallel algorithms' performance on parallel platforms largely depends on the types of communication operations employed. Chapter 5 covers metrics for quantifying parallel algorithms' performance.

The second section, parallel programming, contains Chapters 6 and 7. Chapter 6 covers programming with the message-passing paradigm and the MPI, while Chapter 7 presents programming paradigms for shared-address-space parallel systems.

The third section, nonnumerical algorithms, contains Chapters 9 through 12. Chapter 9 covers sorting, presenting parallel sorting algorithms for parallel random-access machine (PRAM), mesh, hypercube, and general shared-address-space and message-passing architectures. Chapter 10 presents parallel formulations for various graph algorithms, such as minimum-spanning tree, shortest paths, and connected components, as well as sparse graph algorithms. Chapter 11 describes search algorithms for discrete optimization problems, and Chapter 12 covers dynamic-programming algorithms.

The final section, numerical algorithms, contains Chapters 8 and 13. Chapter 8 identifies some basic operations on dense matrices. This chapter comes before the nonnumerical algorithms section because some of its techniques are common to many nonnumerical algorithms. Chapter 13 discusses algorithms for computing fast Fourier transforms.

The book concludes with an appendix that provides fundamentals on the complexity of functions and order analysis used in analyzing algorithms' performance.

The authors recommend the book for either a single, concentrated course on parallel computing or a two-part course. As a two-part course, the first part, "Introduction to Parallel Computing," would consist of Chapters 1 through 6 and cover the basics of algorithm design and parallel programming. The second part, "Design and Analysis of Parallel Algorithms," would consist of Chapters 2, 3, and 8 through 12 and cover more extensively the design and analysis of various parallel algorithms.

The book applies to a broad audience and is an excellent resource for teaching various courses related to parallel algorithms and computing. Any reader interested in parallel-computing topics will profit from reading it.

All of the topics in *Introduction to Parallel Computing* are logically organized and clearly presented. The authors manage to thoroughly cover both the fundamental and advanced aspects of parallel computing, with each chapter contributing interesting perspectives to the book's overall purpose. The authors' expertise in parallel computing is reflected throughout.

Among the book's many strengths is that it contains numerous examples to support theoretical concepts. These examples make the book practical and more understandable. The problems provided at the end of each chapter help you gain direct experience in designing and implementing efficient parallel algorithms. In addition, the book provides useful hints to clarify the more difficult problems.

Other strengths include bibliographic remarks that nicely summarize the accomplishments of efforts related to each chapter, excellent figures that make the material easy to learn, and a comprehensive index and references. (Indexed terms appear in bold italics in the text, making them easy to locate.)

The procedures describing the algorithms are particularly comprehensive and concise, helping you understand even the most difficult algorithms. Particularly useful are the examples with code in Chapters 6 and 7 on message-passing and the shared-address-space programming paradigms.

All chapters are well-written and important to the demanding parallel-computing reader. I particularly like Chapters 3 and 4 because they cover their two basic topics (designing and implementing parallel algorithms, and implementing communication operations) in a straightforward and comprehensive way.

*Introduction to Parallel Computing* is one of the better books I've read, and it makes strong contributions to the field. The authors provide the reader with a wide range of knowledge relating to parallel-computing principles and algorithms. Furthermore, you also learn how to use this knowledge in practice for parallel-algorithm design and implementation. It's an excellent resource for graduate and senior-level undergraduate students, as well as professionals interested in parallel computing.

Helen D. Karatza is an associate professor in the Department of Informatics at the Aristotle University of Thessaloniki, Greece. Contact her at karatza@csd.auth.gr.