<p>We propose a method for evaluating and comparing the fault tolerance of a wide variety of parallel distributed processing networks (more commonly known as artificial neural networks). Although these computing networks are biologically inspired and share many features of biological neural networks, they are not inherently tolerant of the loss of processing elements. We examine two classes of networks, multilayer perceptrons and Gaussian radial basis function networks, and show that there is a marked difference in their operational fault tolerance. Furthermore, we show that fault tolerance is influenced by the training algorithm used and even by the initial state of the network. Using an idea due to Sequin and Clay (1990), we show that training with intermittent, randomly selected faults can dramatically enhance the fault tolerance of radial basis function networks, while it yields only marginal improvement when used with multilayer perceptrons.</p>
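The fault-injection training the abstract attributes to Sequin and Clay can be illustrated with a minimal sketch: during each training step, a hidden unit is occasionally "faulted" (forced to output zero) so the learned weights do not depend too heavily on any single unit. All specifics below (a toy 1-D regression task, fixed Gaussian centers and widths, the fault probability, and the learning rate) are illustrative assumptions, not the paper's actual experimental setup.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1-D regression task: approximate sin(pi * x) on [-1, 1]
x = np.linspace(-1, 1, 64)
y = np.sin(np.pi * x)

# Gaussian RBF hidden layer with fixed, evenly spaced centers
# (an assumption; the paper's training setup is not specified here)
n_hidden = 10
centers = np.linspace(-1, 1, n_hidden)
width = 0.3
phi = np.exp(-((x[:, None] - centers[None, :]) ** 2) / (2 * width**2))

w = np.zeros(n_hidden)   # output-layer weights
lr = 0.1                 # learning rate (illustrative)
fault_prob = 0.1         # per-step chance of injecting a fault

for step in range(2000):
    mask = np.ones(n_hidden)
    if rng.random() < fault_prob:
        # Intermittent fault: one randomly selected hidden unit
        # is stuck at zero for this training step
        mask[rng.integers(n_hidden)] = 0.0
    pred = (phi * mask) @ w
    grad = (phi * mask).T @ (pred - y) / len(x)
    w -= lr * grad

# Fault-free approximation error after fault-injection training
mse = float(np.mean(((phi @ w) - y) ** 2))
print(f"fault-free MSE: {mse:.4f}")
```

Because the network occasionally trains with a missing unit, no single basis function can become indispensable, which is one plausible mechanism for the improved operational fault tolerance the paper reports for RBF networks.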
feedforward neural nets; fault tolerant computing; backpropagation; fault tolerance; parallel distributed processing networks; artificial neural networks; biological neural networks; multilayer perceptrons; Gaussian radial basis function networks; training algorithms; function approximation; robustness.

M. Carter and B. Segee, "Comparative Fault Tolerance of Parallel Distributed Processing Networks," IEEE Transactions on Computers, vol. 43, pp. 1323-1329, 1994.