Optimization of Weighted Finite State Transducer for Speech Recognition
Aug. 2013 (vol. 62 no. 8)
pp. 1607-1615
Louis-Marie Aubert, Application Solutions (Electronics and Vision) Limited, Lewes
Roger Woods, Queen's University Belfast, Belfast
Scott Fischaber, Analytics Engines Ltd, Belfast
Richard Veitch, Maxeler, London
There is considerable interest in creating embedded speech recognition hardware using the weighted finite state transducer (WFST) technique, but performance and memory usage remain challenging. Two system optimization techniques are presented to address this: the first improves token propagation by removing the WFST epsilon input arcs; the second is a one-pass, adaptive pruning algorithm that dramatically reduces the number of active nodes to be computed. Memory and bandwidth results are given for a 5,000-word vocabulary, showing better practical performance than the conventional WFST; this is then exploited by the adaptive pruning algorithm, which reduces the active nodes from 30,000 down to 4,000 with only a 2 percent loss in speech recognition accuracy. These optimizations lead to a simpler design with deterministic performance.
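To give a flavour of the adaptive pruning idea summarized above, the following is a minimal sketch, not the authors' implementation: it assumes a token-passing decoder in which each active WFST state carries an accumulated cost, applies an ordinary beam relative to the best token, and then tightens the beam whenever the survivor count exceeds a cap (the names prune_active_tokens, max_active, and base_beam are illustrative assumptions).

```python
import heapq


def prune_active_tokens(tokens, max_active=4000, base_beam=10.0):
    """Adaptively prune active decoding tokens (illustrative sketch).

    tokens: dict mapping WFST state id -> accumulated cost (lower is better).
    First keeps all tokens within `base_beam` of the best cost, then, if the
    survivor count still exceeds `max_active`, tightens the beam so that only
    the `max_active` cheapest tokens remain.
    """
    if not tokens:
        return tokens

    best = min(tokens.values())

    # Ordinary beam pruning relative to the best token.
    survivors = {s: c for s, c in tokens.items() if c <= best + base_beam}
    if len(survivors) <= max_active:
        return survivors

    # Adaptive step: find the cost of the max_active-th best survivor and
    # discard everything more expensive, capping the active node count.
    kth_cost = heapq.nsmallest(max_active, survivors.values())[-1]
    return {s: c for s, c in survivors.items() if c <= kth_cost}
```

In a frame-synchronous decoder this would be called once per frame after token propagation, keeping the active set bounded (for example near the 4,000 nodes reported in the abstract) and so making memory and bandwidth demands more predictable.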
Index Terms:
Hidden Markov models, speech recognition, bandwidth, speech, decoding, acoustics, loading, WFST, embedded processors, memory organization
Citation:
Louis-Marie Aubert, Roger Woods, Scott Fischaber, Richard Veitch, "Optimization of Weighted Finite State Transducer for Speech Recognition," IEEE Transactions on Computers, vol. 62, no. 8, pp. 1607-1615, Aug. 2013, doi:10.1109/TC.2013.51