Milan Lathia, Gridalogy and the University of Illinois at Urbana-Champaign
Tools and Environments for Parallel and Distributed Computing is a good collection of distributed computing articles by various authors, including the editors themselves. It covers approaches to parallel computing, the art of constructing hardware and software to let multiple processors function together. It presents a survey of the most effective tools in use and case studies of particularly successful implementations.
Parallel and distributed computing have touched all areas of computing, from processing to networks and software. The book delves into designing, developing, and deploying versatile computing environments that can respond to the ever-growing need to crunch data. It also serves as a comprehensive survey of successful methodologies, with an eye toward the past and the future. Another of the book's impressive features is its examples, comparisons, and analyses of the environments and tools catalogued in the chapters.
Chapter 1 gives a basic introduction to parallel and distributed computing concepts and implementation technologies. It's obvious at the start that the book isn't for novices because this chapter uses advanced concepts.
Chapter 2 covers message passing. It gives a fairly detailed account of the theory behind the message-passing model. The chapter's most interesting feature is its description of individual systems, such as p4, socket-based message passing, and PVM (parallel virtual machine). These technologies have become integral to distributed computing. The authors also compare the performance of different systems, protocols, and tools; these comparisons are useful when deciding which to adopt.
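The socket-based message passing the chapter describes can be sketched minimally in Python; this is a hypothetical illustration of the general pattern (length-prefixed messages over a TCP socket), not code from the book:

```python
import socket
import threading

HOST = "127.0.0.1"

def serve(sock):
    """Echo server: receive one length-prefixed message and send it back."""
    conn, _ = sock.accept()
    with conn:
        size = int.from_bytes(conn.recv(4), "big")
        payload = conn.recv(size)
        conn.sendall(len(payload).to_bytes(4, "big") + payload)

# Bind to port 0 so the OS picks a free port, then serve in the background.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind((HOST, 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=serve, args=(server,), daemon=True).start()

# Client side: send a message and wait for the echo.
with socket.create_connection((HOST, port)) as client:
    msg = b"hello, worker"
    client.sendall(len(msg).to_bytes(4, "big") + msg)
    size = int.from_bytes(client.recv(4), "big")
    reply = client.recv(size)

print(reply.decode())
```

The four-byte length prefix is the framing convention here; real message-passing systems such as PVM layer richer semantics (typed buffers, task spawning, group operations) on top of essentially this kind of transport.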
Chapter 3 outlines the properties and features of distributed-shared-memory tools. Again, the authors describe and compare the different hardware-based DSM systems. Although this chapter isn't as detailed as the other chapters, it provides good insight into DSM.
Chapter 4, on distributed-object computing tools, is well written, not because it has many code snippets (which it does) but because it actually pins the theory down with simple algorithms. Ping, concurrency, and numerical-computation experiments compare the performance of the Java RMI (remote method invocation), CORBA (Common Object Request Broker Architecture), and DCOM (distributed component object model) distributed-object technologies. The chapter also presents an experimental evaluation of these approaches and compares them with respect to language, platform dependency, implementation, and performance. It also discusses the Unified Metaobject Model, which could be the next big player in the arena.
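A ping experiment of the kind the chapter uses to compare RMI, CORBA, and DCOM boils down to timing many small request-reply round trips. The following is a hypothetical sketch of that methodology over plain sockets, not the book's actual benchmark code:

```python
import socket
import threading
import time

ROUNDS = 100  # number of request-reply round trips to time

def echo_server(sock, rounds):
    """Echo a fixed number of small messages on one connection."""
    conn, _ = sock.accept()
    with conn:
        for _ in range(rounds):
            conn.sendall(conn.recv(64))

# Bind to port 0 so the OS picks a free port, then serve in the background.
server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.bind(("127.0.0.1", 0))
server.listen(1)
port = server.getsockname()[1]
threading.Thread(target=echo_server, args=(server, ROUNDS), daemon=True).start()

with socket.create_connection(("127.0.0.1", port)) as client:
    start = time.perf_counter()
    for _ in range(ROUNDS):
        client.sendall(b"ping")
        client.recv(64)  # wait for the echo before starting the next round trip
    elapsed = time.perf_counter() - start

avg_ms = elapsed / ROUNDS * 1000
print(f"average round-trip latency: {avg_ms:.3f} ms")
```

Running the same timing loop against each middleware's remote-call mechanism, instead of a raw socket, is what lets such experiments attribute latency differences to the middleware layer itself.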
Chapter 5 tries to thoroughly cover grid computing, but the topic's nature makes capturing its essence in a few pages impossible. For example, the book talks about different available grid technologies and applications, but the details don't do them justice. For grid computing, readers should refer to other texts, such as Grid Computing: A Practical Guide to Technology and Applications, by Ahmar Abbas (Charles River Media, 2004).
The book's last chapter describes software development for high-performance computing, emphasizing the importance of the tools and environments for such efforts. It lists some shortcomings of HPC and gives a stage-by-stage description of the development process. Software engineers will appreciate the authors' explanation of the various stages.
Because Tools and Environments for Parallel and Distributed Computing examines many techniques, it's a good reference for anyone designing a new parallel or distributed system that emphasizes performance. The various case studies provide valuable insight into some of the most intriguing issues facing a growing industry, including designing efficient algorithms.
Cite this article: Milan Lathia, "A Useful Resource for Parallel and Distributed Computing," IEEE Distributed Systems Online, vol. 6, no. 4, 2005.