Issue No. 01 - January (1994, vol. 43)
pp. 104-109
ABSTRACT
Presents novel sequential and parallel learning techniques for codebook design in vector quantizers using neural network approaches. These techniques are used in the training phase of vector quantizer design. They combine the split-and-cluster methodology of traditional vector quantizer design with neural learning, and lead to better quantizer design with lower distortion. The sequential learning approach overcomes the codeword underutilization problem of the competitive learning network. As a result, this network requires only partial or zero neighbor updating, as opposed to the full neighbor updating needed in the self-organizing feature map. The parallel learning network, while satisfying the above characteristics, also allows the codewords to be learned in parallel. The parallel learning technique can be used for faster codebook design in a multiprocessor environment. It is shown that the sequential learning scheme can sometimes outperform the traditional LBG algorithm, while the parallel learning scheme performs very close to the LBG and sequential learning algorithms.
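For context, the sketch below illustrates plain winner-take-all competitive learning for vector quantizer codebook training, the baseline that the paper's sequential and parallel schemes build on and improve. It is not the authors' algorithm; the function names, parameters, and toy data are illustrative assumptions only.

```python
import numpy as np

def competitive_learning_vq(data, num_codewords, epochs=10, lr0=0.1, seed=0):
    """Train a VQ codebook by winner-take-all competitive learning.

    Generic sketch only: this is the standard competitive learning baseline,
    not the sequential/parallel learning schemes proposed in the paper.
    """
    rng = np.random.default_rng(seed)
    # Initialize codewords from randomly chosen training vectors.
    codebook = data[rng.choice(len(data), num_codewords, replace=False)].copy()
    for epoch in range(epochs):
        lr = lr0 * (1.0 - epoch / epochs)  # decaying learning rate
        for x in data[rng.permutation(len(data))]:
            # Find the nearest codeword (the "winner").
            winner = np.argmin(np.sum((codebook - x) ** 2, axis=1))
            # Move only the winner toward the input vector; no neighbor
            # updating as in the self-organizing feature map.
            codebook[winner] += lr * (x - codebook[winner])
    return codebook

def mean_squared_distortion(data, codebook):
    """Average squared error between each vector and its nearest codeword."""
    d = np.sum((data[:, None, :] - codebook[None, :, :]) ** 2, axis=2)
    return d.min(axis=1).mean()

if __name__ == "__main__":
    # Toy training set: 1000 random 4-dimensional vectors.
    train = np.random.default_rng(1).normal(size=(1000, 4))
    cb = competitive_learning_vq(train, num_codewords=16)
    print("distortion:", mean_squared_distortion(train, cb))
```

In this baseline, a codeword that never wins is never updated (the codeword underutilization problem); the sequential learning technique in the paper is designed to avoid exactly that failure mode.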
INDEX TERMS
learning (artificial intelligence); neural nets; parallel algorithms; parallel neural network; vector quantizers; parallel learning techniques; codebook design; neural learning; sequential learning; competitive learning; self-organizing feature map.
CITATION
K.K. Parhi, F.H. Wu, K. Ganesan, "Sequential and Parallel Neural Network Vector Quantizers", IEEE Transactions on Computers, vol. 43, no. 1, pp. 104-109, January 1994, doi:10.1109/12.250614