IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP 2000)
Istanbul, Turkey, June 5-9, 2000
ISBN: 0-7803-6293-4
pp: 1253-1256
B.E. Shi, Dept. of Electr. & Electron. Eng., Hong Kong Univ. of Sci. & Technol., Kowloon, China
ABSTRACT
Minimum classification error (MCE) rate training is a discriminative training method that seeks to minimize an empirical estimate of the error probability computed over a training set. The segmental generalized probabilistic descent (GPD) algorithm for MCE uses the log likelihood of the best path as the discriminant function from which the error probability is estimated. This paper shows that, by using a discriminant function similar to the auxiliary function used in EM, we can obtain a "soft" version of GPD in the sense that information about all possible paths is retained. Its complexity is similar to that of segmental GPD, and for certain parameter values the algorithm is equivalent to segmental GPD. By modifying the commonly used misclassification measure, we obtain an algorithm for embedded MCE training on continuous speech that does not require a separate N-best search to determine competing classes. Experimental results show a 20% error rate reduction compared with maximum likelihood training.
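For readers unfamiliar with MCE training, the following sketch summarizes the standard formulation the abstract builds on (in the style of Juang and Katagiri). The symbols g_k, d_k, M, eta, gamma, and kappa are generic notation introduced here, not taken from the paper, and the final "soft" discriminant is only one plausible reading of the path-retaining idea described above, not the paper's exact definition.

\[
d_k(X;\Lambda) \;=\; -\,g_k(X;\Lambda) \;+\; \frac{1}{\eta}\,\log\!\Bigg[\frac{1}{M-1}\sum_{j\neq k} e^{\eta\, g_j(X;\Lambda)}\Bigg],
\qquad
\ell(d_k) \;=\; \frac{1}{1+e^{-\gamma d_k}},
\]

where \(g_j(X;\Lambda)\) is the discriminant function for class \(j\) among \(M\) classes and \(\ell\) is the smoothed zero-one loss whose gradient drives the GPD parameter updates. Segmental GPD takes the best-path (Viterbi) log likelihood as the discriminant,

\[
g_k(X;\Lambda) \;=\; \max_{q}\,\log p(X, q \mid \lambda_k),
\]

while a "soft" discriminant that retains information about all state sequences could, for example, take the form

\[
g_k(X;\Lambda) \;=\; \frac{1}{\kappa}\,\log \sum_{q} p(X, q \mid \lambda_k)^{\kappa},
\]

which tends to the best-path score as \(\kappa \to \infty\), consistent with the abstract's statement that segmental GPD is recovered for certain parameter values.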
CITATION

B. Shi, K. Yao and Z. Cao, "Soft GPD for minimum classification error rate training," in Proc. IEEE International Conference on Acoustics, Speech, and Signal Processing (ICASSP), Istanbul, Turkey, 2000, pp. 1253-1256.
doi:10.1109/ICASSP.2000.861803