Automatic Speech Activity Detection, Source Localization, and Speech Recognition on the CHIL Seminar Corpus
2005 IEEE International Conference on Multimedia and Expo (2005)
July 6, 2005
D. Macho, TALP Research Center, Universitat Politècnica de Catalunya, Barcelona, Spain
To realize the long-term goal of ubiquitous computing, technological advances in multi-channel acoustic analysis are needed to solve several basic problems, including speaker localization and tracking, speech activity detection (SAD), and distant-talking automatic speech recognition (ASR). The European Commission integrated project CHIL, “Computers in the Human Interaction Loop”, aims to make significant advances in these three technologies. In this work, we report the results of our initial automatic source localization, speech activity detection, and speech recognition experiments on the CHIL seminar corpus, which comprises spontaneous speech collected by both near- and far-field microphones. In addition to the audio sensors, the seminars were also recorded by calibrated video cameras. This simultaneous audio-visual data capture enables the realistic evaluation of component technologies as was never possible with earlier databases.
S. Chu et al., "Automatic Speech Activity Detection, Source Localization, and Speech Recognition on the CHIL Seminar Corpus," 2005 IEEE International Conference on Multimedia and Expo (ICME), Amsterdam, Netherlands, 2005, pp. 876-879.