Issue No. 3, March 2012 (vol. 34), pp. 574-586
Li Liu , National University of Defense Technology, Changsha
Paul W. Fieguth , University of Waterloo, Waterloo
Inspired by theories of sparse representation and compressed sensing, this paper presents a simple, novel, yet very powerful approach for texture classification based on random projection, suitable for large texture database applications. At the feature extraction stage, a small set of random features is extracted from local image patches. The random features are embedded into a bag-of-words model to perform texture classification; thus, learning and classification are carried out in a compressed domain. The proposed unconventional random feature extraction is simple, yet by leveraging the sparse nature of texture images, our approach outperforms traditional feature extraction methods that involve careful design and complex steps. We have conducted extensive experiments on the CUReT, Brodatz, and MSRC databases, comparing the proposed approach to four state-of-the-art texture classification methods: Patch, Patch-MRF, MR8, and LBP. We show that our approach leads to significant improvements in classification accuracy and reductions in feature dimensionality.
Texture classification, random projections, sparse representation, compressed sensing, textons, image patches, bag of words.
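The pipeline described in the abstract, extracting local patches, compressing them with a random projection, and summarizing the compressed features as a bag-of-words histogram over textons, can be sketched as follows. This is a minimal illustrative sketch, not the authors' code: the patch size, feature dimension, Gaussian projection matrix, and the stand-in texton set are all assumptions chosen for brevity (in the paper, textons would come from clustering, e.g. k-means, over training features).

```python
import numpy as np

def extract_patches(image, patch_size=5):
    """Collect all overlapping patch_size x patch_size patches as row vectors."""
    h, w = image.shape
    patches = [
        image[i:i + patch_size, j:j + patch_size].ravel()
        for i in range(h - patch_size + 1)
        for j in range(w - patch_size + 1)
    ]
    return np.array(patches, dtype=float)

def random_projection(patches, n_features=10, seed=0):
    """Project high-dimensional patch vectors onto a random low-dim subspace.

    Entries are drawn i.i.d. Gaussian, a standard choice that approximately
    preserves pairwise distances (Johnson-Lindenstrauss); learning then
    happens directly in this compressed domain."""
    rng = np.random.default_rng(seed)
    d = patches.shape[1]
    R = rng.normal(0.0, 1.0 / np.sqrt(n_features), size=(n_features, d))
    return patches @ R.T

def bow_histogram(features, textons):
    """Assign each compressed feature to its nearest texton and return the
    normalized bag-of-words histogram used as the texture signature."""
    d2 = ((features[:, None, :] - textons[None, :, :]) ** 2).sum(axis=2)
    labels = d2.argmin(axis=1)
    hist = np.bincount(labels, minlength=len(textons)).astype(float)
    return hist / hist.sum()

# Toy example on a synthetic 10x10 "image".
image = np.arange(100.0).reshape(10, 10)
patches = extract_patches(image)        # 36 patches of dimension 25
feats = random_projection(patches)      # compressed to dimension 10
textons = feats[:4]                     # stand-in for learned cluster centers
signature = bow_histogram(feats, textons)
```

Classification would then compare such histogram signatures (e.g. with a nearest-neighbor classifier), exactly as in conventional texton-based methods, but on features a fraction of the original patch dimensionality.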
Li Liu, Paul W. Fieguth, "Texture Classification from Random Features", IEEE Transactions on Pattern Analysis & Machine Intelligence, vol.34, no. 3, pp. 574-586, March 2012, doi:10.1109/TPAMI.2011.145