Proceedings of 1994 28th Asilomar Conference on Signals, Systems and Computers (1994)
Pacific Grove, CA, USA
Oct. 31, 1994 to Nov. 2, 1994
ISSN: 1058-6393
ISBN: 0-8186-6405-3
pp: 879-886
J.J. Shynk , Dept. of Electr. & Comput. Eng., California Univ., Santa Barbara, CA, USA
N.J. Bershad , Dept. of Electr. & Comput. Eng., California Univ., Santa Barbara, CA, USA
ABSTRACT
The convergence behavior of perceptron learning algorithms has been difficult to analyze because of their inherent nonlinearity and the lack of an appropriate model for the training signals. In many cases, extensive computer simulations have been the only way of quantifying their performance. Previously we introduced a stochastic convergence model based on a system identification formulation of the training data that allows one to derive closed-form expressions for the stationary points and cost functions, as well as deterministic recursions for the transient learning behavior. We provide an overview of this approach and describe how it is applied to single- and two-layer perceptron configurations.
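A minimal sketch of the kind of setup the abstract describes, not the paper's analysis: a single-layer perceptron trained with the Rosenblatt error-correction rule, where the desired response is generated by a fixed "reference" perceptron driven by Gaussian inputs (the system-identification formulation of the training data). All names, dimensions, and parameter values below are illustrative assumptions.

```python
import numpy as np

# Illustrative sketch: Rosenblatt perceptron learning under a
# system-identification data model (desired response produced by an
# unknown reference perceptron with i.i.d. Gaussian inputs).
# All values here are assumptions for demonstration only.

rng = np.random.default_rng(0)

n_inputs = 8          # input dimension (illustrative)
mu = 0.01             # learning-rate (step-size) parameter
n_steps = 20000       # number of training iterations

w_ref = rng.standard_normal(n_inputs)   # unknown reference weights to identify
w = np.zeros(n_inputs)                  # adaptive perceptron weights

for k in range(n_steps):
    x = rng.standard_normal(n_inputs)   # Gaussian training input vector
    d = np.sign(w_ref @ x)              # desired response from reference model
    y = np.sign(w @ x)                  # perceptron output (signum activation)
    w += mu * (d - y) * x               # Rosenblatt error-correction update

# The signum nonlinearity determines the weights only up to a positive scale,
# so compare the direction of the adaptive weights with the reference weights.
cos_angle = (w @ w_ref) / (np.linalg.norm(w) * np.linalg.norm(w_ref))
print(f"cosine between adaptive and reference weight vectors: {cos_angle:.3f}")
```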
INDEX TERMS
multilayer perceptrons, feedforward neural nets, identification, learning (artificial intelligence), stochastic processes, Gaussian processes, transient analysis, convergence
CITATION

J. Shynk and N. Bershad, "On the system identification convergence model for perceptron learning algorithms," Proceedings of 1994 28th Asilomar Conference on Signals, Systems and Computers (ACSSC), Pacific Grove, CA, USA, 1995, pp. 879-886.
doi:10.1109/ACSSC.1994.471587