Combining Image Compression and Classification Using Vector Quantization
May 1995 (vol. 17, no. 5)
pp. 461-473

Abstract—Statistical clustering methods have long been used for a variety of signal processing applications, including both classification and vector quantization for signal compression. We describe a method of combining classification and compression into a single vector quantizer by incorporating a Bayes risk term into the distortion measure used in the quantizer design algorithm. Once trained, the quantizer can operate to minimize the Bayes risk weighted distortion measure if there is a model providing the required posterior probabilities, or it can operate in a suboptimal fashion by minimizing only squared error. Comparisons are made with other vector quantizer based classifiers, including the independent design of quantization and minimum Bayes risk classification and Kohonen’s LVQ. A variety of examples demonstrate that the proposed method can provide classification ability close to or superior to LVQ while simultaneously providing superior compression performance.
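
The abstract describes a Lloyd-style design in which each codeword carries both a reproduction vector and a class label, and training vectors are encoded under a distortion that adds a Bayes risk term to squared error. The sketch below (Python/NumPy) illustrates that general idea under stated assumptions; the function name, the lam weighting parameter, the 0/1 default cost matrix, and the exact update rules are illustrative choices, not the authors' published algorithm.

```python
import numpy as np

def design_bayes_risk_vq(X, posteriors, K, lam=1.0, cost=None, n_iter=20, seed=0):
    """Illustrative Lloyd-style design of a Bayes-risk-weighted VQ (a sketch,
    not the exact algorithm of the paper).

    X          : (N, d) training vectors
    posteriors : (N, C) estimated class posteriors P(class | x) for each vector
    K          : number of codewords
    lam        : assumed weight trading off Bayes risk against squared error
    cost       : (C, C) misclassification cost matrix; defaults to 0/1 loss
    """
    rng = np.random.default_rng(seed)
    N, d = X.shape
    C = posteriors.shape[1]
    if cost is None:
        cost = 1.0 - np.eye(C)                                    # 0/1 loss
    codebook = X[rng.choice(N, K, replace=False)].astype(float)   # reproduction vectors
    labels = rng.integers(0, C, size=K)                           # class label per codeword

    for _ in range(n_iter):
        # Encoding step: combined distortion = squared error + lam * expected
        # misclassification cost of the codeword's label under the posteriors.
        sq_err = ((X[:, None, :] - codebook[None, :, :]) ** 2).sum(axis=-1)  # (N, K)
        exp_cost = posteriors @ cost.T        # (N, C): expected cost of deciding each class
        assign = (sq_err + lam * exp_cost[:, labels]).argmin(axis=1)

        # Update step: the cell mean minimizes the squared-error part; the label
        # minimizing total expected cost over the cell handles the risk part.
        for k in range(K):
            members = assign == k
            if members.any():
                codebook[k] = X[members].mean(axis=0)
                labels[k] = (cost @ posteriors[members].sum(axis=0)).argmin()
    return codebook, labels
```

With posteriors estimated from a labeled training set, the returned codebook and labels let a single quantizer act as both a compressor and a classifier; setting lam = 0 recovers ordinary squared-error VQ design, while encoding with the full combined distortion corresponds to the "optimal" mode mentioned in the abstract.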

[1] P.C. Cosman, H.C. Davidson, C.J. Bergin, C. Tseng, L.E. Moses, R.A. Olshen, and R.M. Gray, “The effect of lossy compression on diagnostic accuracy of thoracic CT images,” Radiology, vol. 190, no. 2, pp. 517-524, 1994.
[2] R. Hamabe, Y. Yamada, M. Murata, and T. Namekawa, “A speech recognition system using inverse filter matching technique,” Proc. Ann. Conf. Inst. of Television Engineers, Kyushu University, June 1981 (in Japanese).
[3] J.E. Shore and D.K. Burton, “Discrete utterance speech recognition without time alignment,” Proc. ICASSP, May 1982, p. 907.
[4] S.-S. Huang and R.M. Gray, “Spellmode recognition based on vector quantization,” Speech Communication, vol. 7, pp. 41-53, 1988.
[5] G.F. McLean, “Vector quantization for texture classification,” IEEE Trans. Systems, Man, and Cybernetics, vol. 23, no. 3, pp. 637-649, May/June 1993.
[6] K.L. Oehler and R.M. Gray, “Combining image classification and image compression using vector quantization,” Proc. 1993 IEEE Data Compression Conference, J.A. Storer and M. Cohn, Eds. Snowbird, Utah: IEEE Computer Society Press, Mar. 1993, pp. 2-11.
[7] H. Abut, Ed., Vector Quantization, IEEE Reprint Collection. Piscataway, N.J.: IEEE Press, May 1990.
[8] A. Gersho and R.M. Gray, Vector Quantization and Signal Compression. Boston: Kluwer Academic, 1992.
[9] S.P. Lloyd, “Least squares quantization in PCM,” unpublished Bell Laboratories technical note, 1957. Portions presented at the Institute of Mathematical Statistics Meeting, Atlantic City, N.J., Sept. 1957; published in the March 1982 special issue on quantization of the IEEE Trans. Information Theory.
[10] E. Forgy, “Cluster analysis of multivariate data: Efficiency vs. interpretability of classification,” Biometrics, vol. 21, p. 768, 1965 (abstract).
[11] J. MacQueen, “Some methods for classification and analysis of multivariate observations,” Proc. 5th Berkeley Symp. on Math. Stat. and Prob., vol. 1, pp. 281-296, 1967.
[12] A.B. Nobel, “Histogram regression estimation using data-dependent partitions,” in preparation.
[13] G. Lugosi and A.B. Nobel, “Consistency of data-driven histogram methods for density estimation and classification,” Beckman Institute Technical Report UIUC-BI-93-01, University of Illinois, Urbana-Champaign, 1993, submitted for publication.
[14] A.B. Nobel and R.A. Olshen, “Termination and continuity of greedy growing for tree structured vector quantizers,” submitted for publication.
[15] L. Breiman, J.H. Friedman, R.A. Olshen, and C.J. Stone, Classification and Regression Trees. Belmont, Calif.: Wadsworth, 1984.
[16] E.E. Hilbert, “Cluster compression algorithm: A joint clustering/data compression concept,” Publication 77-43, Pasadena, Calif.: Jet Propulsion Laboratory, Dec. 1977.
[17] H.V. Poor and J.B. Thomas, “Applications of Ali-Silvey distance measures in the design of generalized quantizers for binary decision systems,” IEEE Trans. Comm., vol. 25, pp. 893-900, Sept. 1977.
[18] G.R. Benitz and J.A. Bucklew, “Asymptotically optimal quantizers for detection of i.i.d. data,” IEEE Trans. Inform. Theory, vol. 35, pp. 316-325, 1989.
[19] E. Fix and J.L. Hodges, Jr., “Discriminatory analysis, nonparametric discrimination, consistency properties,” Project 21-49-004, Report No. 4, Randolph Field, Texas: USAF School of Aviation Medicine, Feb. 1951.
[20] T.M. Cover and P. Hart, “Nearest neighbor pattern classification,” IEEE Trans. Information Theory, vol. 13, pp. 21-27, Jan. 1967.
[21] P.A. Devijver and J. Kittler, “On the edited nearest neighbor rule,” Proc. 5th Int’l Conf. on Pattern Recognition, 1980, pp. 72-80.
[22] Q. Xie, C.A. Laszlo, and R.K. Ward, “Vector quantization technique for nonparametric classifier design,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 15, no. 12, pp. 1326-1330, Dec. 1993.
[23] K. Popat and R.W. Picard, “Novel cluster-based probability model for texture synthesis, classification, and compression,” Proc. SPIE Visual Comm. and Image Processing, Boston, Nov. 1993.
[24] K. Popat and R.W. Picard, “Cluster-based probability model applied to image restoration and compression,” Proc. ICASSP, Adelaide, Australia, 1994.
[25] T. Kohonen, Self-Organization and Associative Memory. Berlin, Heidelberg, New York: Springer-Verlag, 1988.
[26] T. Kohonen, G. Barna, and R. Chrisley, “Statistical pattern recognition with neural networks: Benchmark studies,” Proc. IEEE Int’l Conf. on Neural Networks, San Diego, Calif., pp. I-61-68, July 1988.
[27] C.J. Stone, “Consistent nonparametric regression,” Annals of Statistics, vol. 5, pp. 595-645, 1977.
[28] P.A. Chou, T. Lookabaugh, and R.M. Gray, “Entropy-constrained vector quantization,” IEEE Trans. Acoustics, Speech, and Signal Processing, vol. 37, pp. 31-42, 1989.
[29] P.A. Chou, T. Lookabaugh, and R.M. Gray, “Optimal pruning with applications to tree-structured source coding and modeling,” IEEE Trans. Inform. Theory, vol. 35, no. 2, pp. 299-315, Mar. 1989.
[30] E.A. Riskin and R.M. Gray, “A greedy tree growing algorithm for the design of variable rate vector quantizers,” IEEE Trans. Signal Processing, vol. 39, pp. 2500-2507, Nov. 1991.
[31] K.L. Oehler, P.C. Cosman, R.M. Gray, and J. May, “Classification using vector quantization,” Conf. Record 25th Asilomar Conf. on Signals, Systems, and Computers, Pacific Grove, Calif., pp. 439-445, Nov. 1991.
[32] K.L. Oehler, Image Compression and Classification Using Vector Quantization, PhD dissertation, Stanford University, 1993.
[33] J. Kramer, “Tree structured neural net classifier,” Tech. Rep. #19900805, Stanford University Center for Design Research, 1990.
[34] T. Kohonen, J. Kangas, J. Laaksonen, and K. Torkkola, “LVQ_PAK: The learning vector quantization program package, version 2.1,” Tech. Rep., Helsinki University of Technology, Laboratory of Computer and Information Science, Finland, Oct. 1992.
[35] K. Perlmutter, R.M. Gray, K.L. Oehler, and R.A. Olshen, “Bayes risk weighted tree structured vector quantization with estimated class posteriors,” Proc. IEEE Data Compression Conf., Snowbird, Utah, Apr. 1993, pp. 274-283.
[36] K.O. Perlmutter, R.M. Gray, K.L. Oehler, and R.A. Olshen, “Bayes risk weighted vector quantization with estimated class posteriors,” submitted for publication, 1994.
[37] R.D. Wesel and R.M. Gray, “Bayes risk weighted VQ and learning VQ,” Proc. IEEE Data Compression Conf., Snowbird, Utah, Apr. 1994.

Index Terms:
Image compression, image classification, vector quantization, image coding, statistical clustering.
Citation:
Karen L. Oehler, Robert M. Gray, "Combining Image Compression and Classification Using Vector Quantization," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 17, no. 5, pp. 461-473, May 1995, doi:10.1109/34.391396