Issue No. 08, August 1996 (vol. 7)
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/71.532114
<p><b>Abstract</b>—We introduce a computation model for developing and analyzing parallel algorithms on distributed memory machines. The model allows the design of algorithms using a single address space and does not assume any particular interconnection topology. We capture performance by incorporating a cost measure for interprocessor communication induced by remote memory accesses. The cost measure includes parameters reflecting memory latency, communication bandwidth, and spatial locality. Our model allows the initial placement of the input data and pipelined prefetching.</p><p>We use our model to develop parallel algorithms for various data rearrangement problems, load balancing, sorting, FFT, and matrix multiplication. We show that most of these algorithms achieve optimal or near optimal communication complexity while simultaneously guaranteeing an optimal speed-up in computational complexity. Ongoing experimental work in testing and evaluating these algorithms has thus far shown very promising results.</p>
<p><b>Index Terms</b>—Parallel algorithms, parallel model, personalized communication, broadcasting, load balancing, sorting, Fast Fourier Transform, matrix multiplication.</p>
J. F. JáJá and K. W. Ryu, "The Block Distributed Memory Model," in IEEE Transactions on Parallel & Distributed Systems, vol. 7, no. 8, pp. 830-840, 1996.