<p>Neural computation organizes processing into a large number of massively interconnected processing elements that exchange signals. Processing within an element usually involves summing weighted input values, applying a (non)linear function to the sum, and forwarding the result to other elements. Since the basic principle of neurocomputation is learning by example, this processing must be repeated many times, with the weights adjusted until the network learns the problem. An artificial neural network can be implemented as a simulation programmed on a general-purpose computer or as an emulation realized on special-purpose hardware. Although sequential simulations are widespread and offer comfortable software environments for developing and analyzing neural networks, the computational needs of realistic applications exceed the capabilities of sequential computers. Parallelization is therefore necessary to cope with the high computational and communication demands of neuroapplications. Because matrix-vector operations are at the core of many neuroalgorithms, processing is often organized to ensure their efficient implementation. The first parallel implementations ran on general-purpose parallel machines. When these approached the performance limits of standard supercomputers, the research focus shifted to architectural improvements. One approach was to build general-purpose programmable neurohardware; another was to construct special-purpose neurohardware that emulates a particular neuromodel. This article discusses techniques and means for parallelizing neurosimulations, both at a high programming level and at a low hardware-emulation level. </p>
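The per-element processing the abstract describes — a weighted input sum, a nonlinear squashing function, and iterative weight adjustment — can be sketched in a few lines. This is a generic illustration, not code from the article; the function names, the sigmoid activation, the delta-rule update, and the learning rate are all illustrative assumptions.

```python
import math

def neuron_output(inputs, weights, bias):
    """One processing element: weighted sum of inputs, then a nonlinear
    (logistic sigmoid) function applied to the sum."""
    s = sum(w * x for w, x in zip(weights, inputs)) + bias
    return 1.0 / (1.0 + math.exp(-s))

def delta_update(inputs, weights, bias, target, lr=0.5):
    """Learning by example: nudge the weights toward a target output.
    A single delta-rule step; names and learning rate are illustrative."""
    y = neuron_output(inputs, weights, bias)
    grad = (target - y) * y * (1.0 - y)  # error times sigmoid derivative
    new_w = [w + lr * grad * x for w, x in zip(weights, inputs)]
    return new_w, bias + lr * grad

# Repeating the presentation of one training example, as the abstract
# notes, gradually drives the element's output toward the target.
w, b = [0.1, -0.2], 0.0
for _ in range(1000):
    w, b = delta_update([1.0, 0.5], w, b, target=1.0)
```

Since each element's weighted sum is a dot product, a whole layer of such elements reduces to a matrix-vector product — which is why, as the abstract notes, efficient matrix-vector operations are central to parallel neurosimulation.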

N. B. Serbedzija, "Simulating Artificial Neural Networks on Parallel Architectures," Computer, vol. 29, pp. 56-63, 1996.