
J. B. Hampshire II and A. Waibel, "The Meta-Pi Network: Building Distributed Knowledge Representations for Robust Multisource Pattern Recognition," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14, no. 7, pp. 751-769, July 1992.
The authors present the Meta-Pi network, a multi-network connectionist classifier that forms distributed low-level knowledge representations for robust pattern recognition, given random feature vectors generated by multiple statistically distinct sources. They illustrate how the Meta-Pi paradigm implements an adaptive Bayesian maximum a posteriori classifier. They also demonstrate its performance in the context of multispeaker phoneme recognition, in which the Meta-Pi superstructure combines speaker-dependent time-delay neural network (TDNN) modules to perform multispeaker /b,d,g/ phoneme recognition with speaker-dependent error rates of 2%. Finally, the authors apply the Meta-Pi architecture to a limited source-independent recognition task, illustrating its discrimination of a novel source. They demonstrate that it can adapt to the novel source (speaker), given five adaptation examples of each of the three phonemes.
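The combination scheme the abstract describes can be illustrated with a minimal sketch: a gating superstructure assigns a normalized weight to each source-specific module, and the combined class posterior is the weighted sum of the modules' posteriors. This is not the authors' implementation; the function names, module count, and numeric values below are purely illustrative assumptions.

```python
import numpy as np

def softmax(z):
    """Normalize gating logits into weights that sum to one."""
    e = np.exp(z - z.max())
    return e / e.sum()

def combine_modules(module_posteriors, gating_logits):
    """Weight each module's class posteriors by its gating output.

    module_posteriors: (K, C) array, one row of class posteriors per module.
    gating_logits: (K,) raw gating scores, one per source module.
    Returns a (C,) combined posterior over the classes.
    """
    g = softmax(gating_logits)
    return g @ module_posteriors

# Two hypothetical speaker-dependent modules over /b, d, g/.
posteriors = np.array([[0.8, 0.1, 0.1],
                       [0.2, 0.5, 0.3]])
logits = np.array([2.0, 0.0])  # gating favors the first module
combined = combine_modules(posteriors, logits)
```

Because the gating weights are normalized and each row of `posteriors` sums to one, the combined output is itself a valid posterior; here the first module dominates, so the combined decision follows it.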