<p><it>Abstract</it>—Many parallel algorithms use hypercubes as the communication topology among their processes. When such algorithms are executed on hypercube multicomputers, the communication cost is kept to a minimum, since processes can be allocated to processors so that only communication between neighboring processors is required. However, the scalability of hypercube multicomputers is constrained by the fact that the interconnection cost per node increases with the total number of nodes. From a scalability point of view, meshes and toruses are more interesting classes of interconnection topologies. This paper focuses on the execution of algorithms with hypercube communication topology on multicomputers with mesh or torus interconnection topologies. The proposed approach is based on studying different embeddings of hypercube graphs onto mesh or torus graphs. The paper concentrates on toruses, since an already known embedding, called the <it>standard embedding</it>, is optimal for meshes. In this paper, an embedding of hypercubes onto toruses of any given dimension is proposed. This novel embedding is called the <it>xor embedding</it>. The paper presents a set of performance figures for both the standard and the xor embeddings and shows that the latter outperforms the former for any torus. In addition, it is proven that for a one-dimensional torus (a ring), the xor embedding is optimal in the sense that it minimizes the execution time of a class of parallel algorithms with hypercube topology. This class of algorithms is frequently found in real applications, such as the FFT and some classes of sorting algorithms.</p>
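<p>To illustrate why the choice of embedding matters on a ring, the following is a minimal sketch. It assumes, for illustration only, that the standard embedding on a one-dimensional torus places hypercube node <it>i</it> at ring position <it>i</it> (the natural binary-label mapping); the function names are hypothetical, not from the paper. Hypercube algorithms such as the FFT make node <it>i</it> communicate with node <it>i</it> XOR 2<sup>d</sup> in step <it>d</it>, so the sketch computes the worst-case ring distance each such link incurs under this mapping.</p>

```python
def ring_distance(a, b, n):
    # Shortest distance between positions a and b on an n-node ring (1-D torus),
    # counting hops in either direction around the ring.
    d = abs(a - b)
    return min(d, n - d)

def standard_link_distances(dim):
    # Worst-case ring distance of each hypercube dimension's links when a
    # 2^dim-node hypercube is embedded onto a 2^dim-node ring by placing
    # node i at position i (the assumed "standard" embedding).
    # In dimension d, hypercube node i communicates with node i XOR 2^d.
    n = 1 << dim
    return {d: max(ring_distance(i, i ^ (1 << d), n) for i in range(n))
            for d in range(dim)}

# For a 16-node hypercube on a 16-node ring, the link distance doubles with
# each dimension: {0: 1, 1: 2, 2: 4, 3: 8}. The highest dimension forces
# messages halfway around the ring, which is the cost a better embedding,
# such as the paper's xor embedding, aims to reduce.
print(standard_link_distances(4))
```

<p>The sketch only quantifies the communication distances of the assumed identity mapping; it does not reproduce the xor embedding itself, whose construction is given in the paper.</p>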
<p><it>Index Terms</it>—Graph embeddings, hypercubes, scalable distributed memory multiprocessors, torus multicomputers, mapping of parallel algorithms.</p>

L. Díaz de Cerio, M. Valero-García and A. González, "Executing Algorithms with Hypercube Topology on Torus Multicomputers," in IEEE Transactions on Parallel & Distributed Systems, vol. 6, no. , pp. 803-814, 1995.