IEEE-INNS-ENNS International Joint Conference on Neural Networks (IJCNN 2000)
Como, Italy
July 24-27, 2000
ISSN: 1098-7576
ISBN: 0-7695-0619-4
pp: 2167
Eiji Mizutani, University of California at Berkeley and Sony US Research Laboratories
Stuart E. Dreyfus, University of California at Berkeley
Kenichi Nishio, Sony Corporation Personal IT Network Company
ABSTRACT
The well-known backpropagation (BP) derivative computation process for multilayer perceptron (MLP) learning can be viewed as a simplified version of the Kelley-Bryson gradient formula in classical discrete-time optimal control theory [1]. We detail the derivation in the spirit of dynamic programming, showing how it can serve to implement more elaborate learning in which teacher signals can be presented to nodes in any hidden layer, as well as at the terminal output layer. We illustrate such an elaborate training scheme on a small-scale industrial problem as a concrete example, in which some hidden nodes are taught to produce specified target values. In this context, part of the hidden layer is no longer “hidden”.
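
For concreteness, here is a minimal NumPy sketch (not the authors' code) of the training scheme the abstract describes: teacher signals for a chosen subset of hidden nodes enter the backward recursion as an extra intermediate-stage cost term, alongside the usual terminal output error. The names train_step, taught, and lam, and the sigmoid/squared-error choices, are illustrative assumptions.

    import numpy as np

    def sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    rng = np.random.default_rng(0)
    n_in, n_hid, n_out = 4, 5, 2
    W1 = rng.normal(scale=0.5, size=(n_hid, n_in))   # input -> hidden weights
    W2 = rng.normal(scale=0.5, size=(n_out, n_hid))  # hidden -> output weights

    taught = np.array([0, 1])   # hidden nodes given explicit target values
    lam, lr = 1.0, 0.1          # hidden-target cost weight, learning rate

    def train_step(x, t_out, t_hid):
        global W1, W2
        # Forward pass: the "state equations" of the discrete-time control view.
        h = sigmoid(W1 @ x)     # hidden activations (intermediate-stage state)
        y = sigmoid(W2 @ h)     # output activations (terminal state)

        # Backward pass: the costate (delta) recursion of the Kelley-Bryson
        # gradient formula, specialized to sigmoid units and squared error.
        delta2 = (y - t_out) * y * (1.0 - y)          # terminal-cost term
        delta1 = (W2.T @ delta2) * h * (1.0 - h)      # backpropagated term

        # Teacher signal at the taught hidden nodes: an intermediate-stage
        # cost 0.5*lam*(h_i - t_i)^2 simply adds to the hidden delta.
        delta1[taught] += lam * (h[taught] - t_hid) * h[taught] * (1.0 - h[taught])

        W2 -= lr * np.outer(delta2, h)
        W1 -= lr * np.outer(delta1, x)
        return y, h

    # Example: one update with targets for two hidden nodes and the outputs.
    x = np.array([1.0, 0.0, 0.5, -0.5])
    y, h = train_step(x, np.array([1.0, 0.0]), np.array([0.9, 0.1]))

In the optimal-control reading, delta2 and delta1 play the role of costate variables; presenting targets to hidden nodes amounts to adding a stage cost at the intermediate layer, which is why their error term is simply added to the backpropagated delta before the weight updates.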
CITATION

E. Mizutani, S. E. Dreyfus and K. Nishio, "On Derivation of MLP Backpropagation from the Kelley-Bryson Optimal-Control Gradient Formula and Its Application," IEEE-INNS-ENNS International Joint Conference on Neural Networks (IJCNN), Como, Italy, 2000, pp. 2167.
doi:10.1109/IJCNN.2000.857892