International Symposium on Parallel Computing in Electrical Engineering (PARELEC 2006)
Sept. 13, 2006 to Sept. 17, 2006, Bialystok, Poland
Wojciech Kwedlo , Bialystok Technical University, Poland
Krzysztof Bandurski , Bialystok Technical University, Poland
This paper considers the problem of training feed-forward neural networks with a differential evolution algorithm. A new parallelization scheme for the computation of the fitness function is proposed, based on data decomposition: both the learning set and the population of the evolutionary algorithm are distributed among the processors, which form a pipeline with a ring topology. In a single step, each processor computes the local fitness of its current subpopulation while sending the previous subpopulation to its successor and receiving the next subpopulation from its predecessor. Communication can thus be overlapped with computation using non-blocking MPI routines. The approach was applied to several classification and regression learning problems. The scalability of the algorithm was measured on a compute cluster consisting of sixteen two-processor servers connected by a fast InfiniBand interconnect. The results of initial experiments show that for large datasets the algorithm achieves very good, near-linear speedup.
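The ring-pipeline schedule described in the abstract can be sketched as follows. This is a minimal, MPI-free simulation of the scheduling idea only (each processor holds one data slice and evaluates whichever subpopulation is currently resident, then passes it to its ring successor); all names and values are illustrative assumptions, not the authors' implementation.

```python
# Sketch of the ring-pipeline fitness computation: P processors each hold a
# fixed slice of the learning set, while the P subpopulations circulate around
# the ring. After P steps, every subpopulation has accumulated partial fitness
# contributions from every data slice. (Illustrative values, not from the paper.)

P = 4  # number of processors / subpopulations in the ring

# local_fitness[p][s]: partial fitness that processor p's data slice
# contributes to subpopulation s (dummy values for the sketch)
local_fitness = [[10 * p + s for s in range(P)] for p in range(P)]

held = list(range(P))  # held[p] = index of the subpopulation at processor p
acc = [0] * P          # acc[s] = fitness accumulated so far for subpopulation s

for step in range(P):
    # Compute phase: each processor adds its local contribution for the
    # subpopulation it currently holds. In the MPI version this computation
    # overlaps with the non-blocking send/receive of the adjacent subpopulations.
    for p in range(P):
        acc[held[p]] += local_fitness[p][held[p]]
    # Communication phase: shift every subpopulation one position around the ring.
    held = [held[(p - 1) % P] for p in range(P)]

# After P steps, each subpopulation's fitness includes every processor's slice.
expected = [sum(local_fitness[p][s] for p in range(P)) for s in range(P)]
assert acc == expected
print(acc)  # [60, 64, 68, 72] for these dummy values
```

The point of the circulating schedule is that at every step all processors are busy and only neighbor-to-neighbor transfers occur, which is what makes overlapping the transfer with the fitness evaluation (via non-blocking `MPI_Isend`/`MPI_Irecv`-style calls) effective.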
W. Kwedlo and K. Bandurski, "A Parallel Differential Evolution Algorithm," International Symposium on Parallel Computing in Electrical Engineering (PARELEC), Bialystok, 2006, pp. 319-324.