<p>This correspondence expands the theoretical framework linking discriminant analysis to adaptive feed-forward layered linear-output networks used as mean-square classifiers. The expanded framework provides stronger theoretical justification for using these networks in pattern classification and yields better insight into their behavior and use. The authors prove that, under reasonable assumptions, minimizing the mean-square error at the network output is equivalent to minimizing both: 1) the difference between the optimum value of a familiar discriminant criterion and the value of this criterion evaluated in the space spanned by the outputs of the final hidden layer, and 2) the difference between the values of the same criterion evaluated in the desired-output and actual-output subspaces. The authors also show, under specific constraints, how to solve the following problem: given a feature-extraction criterion, select the target coding scheme so that this criterion is maximized at the output of the network's final hidden layer. Other properties of these networks are explored.</p>
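The equivalence described in the abstract can be glimpsed in the simplest setting: for two classes, a linear-output network trained as a mean-square classifier recovers, up to scale, the Fisher discriminant direction. The sketch below (synthetic Gaussian data, one-hot targets, closed-form least squares in place of gradient training; all names and data are illustrative assumptions, not from the paper) compares the two directions numerically.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two synthetic Gaussian classes with shared covariance (assumed data)
n = 500
X0 = rng.normal([0.0, 0.0], 1.0, size=(n, 2))
X1 = rng.normal([3.0, 1.0], 1.0, size=(n, 2))
X = np.vstack([X0, X1])
Xa = np.hstack([X, np.ones((2 * n, 1))])  # augment with a bias input

# One-hot target coding, as in a mean-square classifier
T = np.zeros((2 * n, 2))
T[:n, 0] = 1.0
T[n:, 1] = 1.0

# Linear-output weights minimizing the mean-square error (closed form)
W, *_ = np.linalg.lstsq(Xa, T, rcond=None)
w_mse = W[:2, 1] - W[:2, 0]  # discriminant direction implied by the net

# Fisher LDA direction: Sw^{-1} (m1 - m0)
m0, m1 = X0.mean(0), X1.mean(0)
Sw = (X0 - m0).T @ (X0 - m0) + (X1 - m1).T @ (X1 - m1)
w_lda = np.linalg.solve(Sw, m1 - m0)

# Cosine similarity between the two directions (close to 1)
cos = abs(w_mse @ w_lda) / (np.linalg.norm(w_mse) * np.linalg.norm(w_lda))
print(round(cos, 4))
```

The near-unit cosine similarity is the two-class instance of the general result: minimizing the output mean-square error drives the network toward the optimum of the discriminant criterion.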
pattern recognition; feedforward neural nets; feature extraction; Bayes methods; discriminatory power; adaptive feedforward layered networks; discriminant analysis; linear-output networks; mean-square classifiers; pattern classification; mean-square error minimisation; familiar discriminant criterion; final hidden layer; feature extraction criterion; target coding scheme

H. Osman and M. Fahmy, "On the Discriminatory Power of Adaptive Feed-Forward Layered Networks," in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 16, pp. 837-842, 1994.