IEEE International Symposium on Cluster Computing and the Grid (2008)
May 19–22, 2008
ISBN: 978-0-7695-3156-4
pp: 90-97
ABSTRACT
In this paper, we study the impact of multiprocessor memory systems, in particular distributed memory (DM) and virtual shared memory (VSM), on the implementation of parallel back-propagation neural network algorithms. First, the neural network is partitioned into sub-neural networks by applying a hybrid partitioning scheme. Second, each partitioned network is evaluated using matrix multiplication. Three different sizes of neural networks are used, with exchange rate prediction as the reference problem. Parallel implementations for both the distributed memory and virtual shared memory scenarios are obtained. These algorithms are implemented on a high-performance cluster, "Monolith", consisting of over 396 nodes. Programming is realized using the Message Passing Interface (MPI) library and C-Linda. The partitioned matrix multiplication has the fastest execution time, and the DM/MPI implementation is always faster than the VSM/Linda equivalent. However, in VSM/Linda it is possible to allow the parallel neural network to choose the optimum number of processors dynamically.
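The abstract's evaluation scheme, partitioning a network into sub-networks and evaluating each with matrix multiplication, can be sketched as follows. This is an illustrative sketch only, not the authors' code: all function names are hypothetical, and the sub-networks run sequentially here, whereas in the paper each would run on a separate MPI rank or Linda worker.

```python
# Sketch (assumption, not the paper's implementation): a layer's weight
# matrix is split into column blocks, one per "sub-network"; each block's
# neurons are evaluated by matrix multiplication, and concatenating the
# block outputs reproduces the full layer's output.
import math

def matmul(a, b):
    """Plain-Python matrix product: a is (n x k), b is (k x m)."""
    return [[sum(a[i][t] * b[t][j] for t in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def sigmoid(x):
    """Standard logistic activation used in back-propagation networks."""
    return 1.0 / (1.0 + math.exp(-x))

def layer_partitions(weights, n_parts):
    """Split a (k x m) weight matrix into up to n_parts column blocks,
    one block of neurons per sub-network."""
    m = len(weights[0])
    step = math.ceil(m / n_parts)
    return [[row[j:j + step] for row in weights]
            for j in range(0, m, step)]

def forward_partitioned(inputs, weights, n_parts):
    """Each partition computes its neurons' activations via matmul;
    the column blocks are then concatenated into the full layer output."""
    outputs = []
    for block in layer_partitions(weights, n_parts):
        partial = matmul(inputs, block)  # this sub-network's neurons only
        outputs.append([[sigmoid(v) for v in row] for row in partial])
    # concatenate the per-partition column blocks row by row
    return [sum((blk[i] for blk in outputs), []) for i in range(len(inputs))]
```

Because column-blocked matrix multiplication is exact, the partitioned forward pass is numerically identical to the unpartitioned one; the parallel gain comes from distributing the blocks across processors.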
INDEX TERMS
Parallel neural network, Hybrid partition, DM, VSM
CITATION

K. Ganeshamoorthy and D. N. Ranasinghe, "On the Performance of Parallel Neural Network Implementations on Distributed Memory Architectures," 2008 8th IEEE International Symposium on Cluster Computing and the Grid (CCGRID '08), Lyon, 2008, pp. 90-97.
doi:10.1109/CCGRID.2008.68