A New Approach to the Design of Reinforcement Schemes for Learning Automata: Stochastic Estimator Learning Algorithms
Issue No. 04 - August (1994 vol. 6)
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/69.298183
<p>A new class of learning automata is introduced. The new automata use a stochastic estimator and are able to operate in nonstationary environments with high accuracy and a high adaptation rate. Under the stochastic estimator scheme, the estimates of the mean rewards of the actions are computed stochastically, so they are not strictly dependent on the environmental responses. The dependence between the stochastic estimates and the deterministic estimator's contents is relaxed when the latter are old and probably invalid. In this way, actions that have not been selected recently have the opportunity to be estimated as "optimal", to increase their choice probability, and, consequently, to be selected. Thus, the estimator is always up to date and the automaton is able to adapt to environmental changes. The performance of the Stochastic Estimator Learning Automaton (SELA) is superior to that of previous well-known S-model ergodic schemes. Furthermore, it is proved that SELA is absolutely expedient in every stationary S-model random environment.</p>
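The stochastic estimator idea described above can be sketched in a few lines of Python. In this illustrative sketch (not the paper's exact scheme), each action keeps a deterministic running-mean estimate plus a staleness counter; the stochastic estimate adds zero-mean noise whose spread grows with staleness, so a long-unselected action can occasionally appear "optimal" and be re-tried. The class name, the linear noise-growth rule, and the `noise_scale` parameter are assumptions chosen for clarity.

```python
import random

class SELA:
    """Illustrative sketch of a stochastic estimator learning automaton.

    The noise on each action's estimate grows with the time since that
    action was last selected, relaxing the dependence on old, possibly
    invalid deterministic estimates (as the abstract describes).
    """

    def __init__(self, n_actions, noise_scale=0.05, rng=None):
        self.n = n_actions
        self.noise_scale = noise_scale      # assumed noise-growth rate
        self.means = [0.0] * n_actions      # deterministic estimates
        self.counts = [0] * n_actions       # times each action was tried
        self.staleness = [0] * n_actions    # steps since last selection
        self.rng = rng or random.Random()

    def select(self):
        # Stochastic estimate = deterministic mean + staleness-scaled noise.
        u = [m + self.rng.gauss(0.0, self.noise_scale * (1 + s))
             for m, s in zip(self.means, self.staleness)]
        # Choose the action whose stochastic estimate is currently highest.
        return max(range(self.n), key=lambda i: u[i])

    def update(self, action, reward):
        # Running-mean update of the deterministic estimate.
        self.counts[action] += 1
        self.means[action] += (reward - self.means[action]) / self.counts[action]
        # Reset staleness for the chosen action; age all the others.
        for i in range(self.n):
            self.staleness[i] = 0 if i == action else self.staleness[i] + 1
```

Run against a stationary two-action S-model environment (rewards in [0, 1]), the automaton concentrates its selections on the action with the higher mean reward while still occasionally probing the other one, which is what lets it track a change if the environment later becomes nonstationary.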
finite automata; stochastic automata; unsupervised learning; reinforcement schemes; learning automata; stochastic estimator learning algorithms; stochastic estimator; nonstationary environments; high adaptation rate; SELA; Stochastic Estimator Learning Automaton; S-model ergodic scheme; absolute expediency
G. Papadimitriou, "A New Approach to the Design of Reinforcement Schemes for Learning Automata: Stochastic Estimator Learning Algorithms," in IEEE Transactions on Knowledge & Data Engineering, vol. 6, no. 4, pp. 649-654, Aug. 1994.