Proceedings of 1994 28th Asilomar Conference on Signals, Systems and Computers (1994)
Pacific Grove, CA, USA
Oct. 31, 1994 to Nov. 2, 1994
J.J. Shynk, Dept. of Electr. & Comput. Eng., University of California, Santa Barbara, CA, USA
N.J. Bershad, Dept. of Electr. & Comput. Eng., University of California, Irvine, CA, USA
The convergence behavior of perceptron learning algorithms has been difficult to analyze because of their inherent nonlinearity and the lack of an appropriate model for the training signals. In many cases, extensive computer simulations have been the only way of quantifying their performance. Previously we introduced a stochastic convergence model based on a system identification formulation of the training data that allows one to derive closed-form expressions for the stationary points and cost functions, as well as deterministic recursions for the transient learning behavior. We provide an overview of this approach and describe how it is applied to single- and two-layer perceptron configurations.
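As a rough illustration of the system identification formulation described in the abstract, the sketch below trains a single-layer perceptron on data generated by a noisy "reference" linear system passed through a sign nonlinearity. This is a hedged, minimal reconstruction of the general setup, not the authors' exact model: the names `w_ref`, `mu`, and `noise_std`, the Gaussian input assumption, and the Rosenblatt-style error-correction update are all illustrative choices.

```python
import numpy as np

# Hypothetical system-identification training model (sketch):
# the training signal d(k) is the sign of a noisy linear combination
# of a Gaussian input, produced by an unknown reference weight vector.

rng = np.random.default_rng(0)
dim = 4
w_ref = rng.standard_normal(dim)   # unknown "system" generating the training signal
w = np.zeros(dim)                  # adaptive perceptron weights
mu = 0.05                          # step size (illustrative value)
noise_std = 0.1                    # training-signal noise level (illustrative)

for k in range(5000):
    x = rng.standard_normal(dim)                                 # i.i.d. Gaussian input
    d = np.sign(w_ref @ x + noise_std * rng.standard_normal())   # training signal
    y = np.sign(w @ x)                                           # perceptron output
    w += mu * (d - y) * x                                        # error-correction update

# With this data model, w should align with w_ref up to a scale factor;
# the cosine of the angle between them measures convergence.
cos = w @ w_ref / (np.linalg.norm(w) * np.linalg.norm(w_ref))
```

In simulation the cosine similarity approaches 1, consistent with the abstract's point that such a data model makes the stationary points of the algorithm analyzable rather than observable only through Monte Carlo runs.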
multilayer perceptrons, feedforward neural nets, identification, learning (artificial intelligence), stochastic processes, Gaussian processes, transient analysis, convergence
J. Shynk and N. Bershad, "On the system identification convergence model for perceptron learning algorithms," Proceedings of 1994 28th Asilomar Conference on Signals, Systems and Computers (ACSSC), Pacific Grove, CA, USA, 1995, pp. 879-886.