Issue No. 01 - January (2005 vol. 16)
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/TPDS.2005.11
<p><b>Abstract</b>—This paper presents an efficient mapping scheme for the multilayer perceptron (MLP) network trained using the back-propagation (BP) algorithm on a network of workstations (NOWs). A hybrid partitioning (HP) scheme is used to partition the network, and each partition is mapped onto a processor in the NOW. We derive the processing time and memory space required to implement the parallel BP algorithm on NOWs. Performance parameters such as speed-up and space reduction factor are evaluated for the HP scheme, and the scheme is compared with earlier work based on the vertical partitioning (VP) scheme for mapping the MLP onto NOWs. The performance of the HP scheme is evaluated by solving an optical character recognition (OCR) problem on a network of ALPHA machines. Analytical and experimental results show that the proposed parallel algorithm has better speed-up, less communication time, and a better space reduction factor than the earlier algorithm. This paper also presents a simple and efficient static mapping scheme for heterogeneous systems. Using divisible load scheduling theory, a closed-form expression for the number of neurons assigned to each processor in the NOW is obtained. Analytical and experimental results for the static mapping problem on NOWs are also presented.</p>
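To illustrate the static mapping idea the abstract describes, here is a minimal sketch of assigning neurons to heterogeneous workstations. The proportional-to-speed rule below is an illustrative assumption in the spirit of divisible load scheduling; it is not the paper's exact closed-form expression, which also accounts for communication delays.

```python
# Hedged sketch: static assignment of neurons across a heterogeneous NOW.
# Assumption: each processor's share is proportional to its relative speed,
# so all processors finish their per-layer computation at roughly the same
# time (communication costs are ignored in this simplification).

def assign_neurons(total_neurons, speeds):
    """Split `total_neurons` among processors in proportion to `speeds`."""
    total_speed = sum(speeds)
    # Provisional proportional shares, rounded down.
    shares = [int(total_neurons * s / total_speed) for s in speeds]
    # Hand any leftover neurons to the fastest processors first.
    leftover = total_neurons - sum(shares)
    order = sorted(range(len(speeds)), key=lambda i: -speeds[i])
    for i in order[:leftover]:
        shares[i] += 1
    return shares

# Hypothetical example: 100 neurons over four machines with relative
# speeds 4 : 2 : 1 : 1.
print(assign_neurons(100, [4.0, 2.0, 1.0, 1.0]))  # → [51, 25, 12, 12]
```

A static split like this is computed once before training starts, which is what makes the mapping "static" in contrast to schemes that rebalance load at run time.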
Multilayer perceptron, back-propagation, network of workstation, optical character recognition, performance measures, divisible load theory.
S. Omkar, S. Suresh and V. Mani, "Parallel Implementation of Back-Propagation Algorithm in Networks of Workstations," in IEEE Transactions on Parallel & Distributed Systems, vol. 16, no. 1, pp. 24-34, 2005.