Issue No. 4, April 2000 (vol. 22)
pp. 371-377
ABSTRACT
Hidden Markov models (HMMs) are stochastic models capable of statistical learning and classification. They have been applied in speech recognition and handwriting recognition because of their great adaptability and versatility in handling sequential signals. On the other hand, because these models have a complex structure, and because the involved data sets usually contain uncertainty, it is difficult to analyze the multiple observation training problem without certain assumptions. For many years, researchers have used Levinson's training equations in speech and handwriting applications, simply assuming that all observations are independent of each other. This paper presents a formal treatment of HMM multiple observation training that does not impose this assumption. In this treatment, the multiple observation probability is expressed as a combination of individual observation probabilities without loss of generality. This combinatorial method gives one the freedom to make different dependence-independence assumptions. By generalizing Baum's auxiliary function into this framework and building an associated objective function using the Lagrange multiplier method, it is proven that the derived training equations guarantee maximization of the objective function. Furthermore, we show that Levinson's training equations can be derived as a special case of this treatment.
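To make the combinatorial idea concrete, here is a hedged sketch in standard HMM notation; the weights $w_k$ and the exact form of the equation below are an illustration inferred from the abstract, not quoted from the paper. If the multiple observation probability for $O = \{O^{(1)}, \ldots, O^{(K)}\}$ is written as a weighted combination of individual observation probabilities, $P(O \mid \lambda) = \sum_{k=1}^{K} w_k \, P(O^{(k)} \mid \lambda)$, then the transition re-estimate takes a weighted Baum-Welch form:

\bar{a}_{ij} = \frac{\sum_{k=1}^{K} w_k \sum_{t=1}^{T_k - 1} \xi_t^{(k)}(i,j)}{\sum_{k=1}^{K} w_k \sum_{t=1}^{T_k - 1} \gamma_t^{(k)}(i)}

where $\xi_t^{(k)}(i,j)$ and $\gamma_t^{(k)}(i)$ are computed by the forward-backward procedure on the $k$-th sequence of length $T_k$. Choosing uniform weights, which corresponds to the full-independence assumption, cancels the $w_k$ terms and recovers Levinson's unweighted training equations as a special case.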
INDEX TERMS
Hidden Markov model, forward-backward procedure, Baum-Welch algorithm, multiple observation training.
CITATION
Xiaolin Li, Marc Parizeau, and Réjean Plamondon, "Training Hidden Markov Models with Multiple Observations - A Combinatorial Method," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 22, no. 4, pp. 371-377, Apr. 2000, doi:10.1109/34.845379.