Issue No. 1 - January 1994 (vol. 16), pp. 54-65
ABSTRACT
The authors examine how the performance of a memoryless vector quantizer changes as a function of its training set size. Specifically, they study how well training set distortion predicts test distortion when the training set is a randomly drawn subset of blocks from the test or training image(s). Using the Vapnik-Chervonenkis (VC) dimension, they derive formal bounds on the difference between the test and training distortion of vector quantizer codebooks. They then describe extensive empirical simulations that test these bounds for a variety of codebook sizes and vector dimensions, and give practical suggestions for determining the training set size needed to achieve good generalization from a codebook. They conclude that training sets comprising only a small fraction of the available data can produce results close to those obtained when all available data are used.
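
The following Python sketch (not the authors' code) illustrates the experiment described above: a codebook is designed with the generalized Lloyd (k-means) algorithm on a randomly drawn subset of blocks, and its training distortion is compared with its distortion on the full block set. The block dimension, codebook size, subset size, and synthetic data are illustrative assumptions, not the paper's setup.

import numpy as np

def train_codebook(blocks, codebook_size, iters=50, seed=0):
    # Generalized Lloyd (k-means) design of a memoryless VQ codebook.
    rng = np.random.default_rng(seed)
    codebook = blocks[rng.choice(len(blocks), codebook_size, replace=False)].copy()
    for _ in range(iters):
        # Nearest-codeword assignment under squared-error distortion.
        d2 = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
        nearest = d2.argmin(axis=1)
        # Centroid update; an empty cell keeps its old codeword.
        for k in range(codebook_size):
            members = blocks[nearest == k]
            if len(members):
                codebook[k] = members.mean(axis=0)
    return codebook

def distortion(blocks, codebook):
    # Mean squared-error distortion of the blocks under the codebook.
    d2 = ((blocks[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=2)
    return d2.min(axis=1).mean()

rng = np.random.default_rng(1)
all_blocks = rng.normal(size=(4096, 16))   # stand-in for 4x4 image blocks
subset = all_blocks[rng.choice(len(all_blocks), 256, replace=False)]
codebook = train_codebook(subset, codebook_size=32)
print("training distortion:", distortion(subset, codebook))
print("test distortion:    ", distortion(all_blocks, codebook))

In the regime the paper studies, the training distortion typically underestimates the test distortion, and the gap shrinks as the training subset grows.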
INDEX TERMS
vector quantisation; image coding; learning systems; statistics; small training sets; memoryless vector quantizer; training set distortion; test distortion; training image; Vapnik-Chervonenkis dimension; formal bounds; vector quantizer codebooks; empirical simulations
CITATION
D. Cohn, E. A. Riskin, and R. Ladner, "Theory and Practice of Vector Quantizers Trained on Small Training Sets," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 16, no. 1, pp. 54-65, January 1994, doi: 10.1109/34.273717