Iterative Kernel Principal Component Analysis for Image Modeling
September 2005 (vol. 27 no. 9)
pp. 1351-1366
In recent years, Kernel Principal Component Analysis (KPCA) has been suggested for various image processing tasks requiring an image model, such as denoising or compression. The original form of KPCA, however, can only be applied to strongly restricted image classes due to the limited number of training examples that can be processed. We therefore propose a new iterative method for performing KPCA, the Kernel Hebbian Algorithm, which iteratively estimates the Kernel Principal Components with only linear-order memory complexity. In our experiments, we compute models for complex image classes such as faces and natural images, which require a large number of training examples. The resulting image models are tested in single-frame super-resolution and denoising applications. The KPCA model is not specifically tailored to these tasks; in fact, the same model can be used in super-resolution with variable input resolution or in denoising with unknown noise characteristics. In spite of this, its super-resolution and denoising performance is comparable to that of existing methods.
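The Kernel Hebbian Algorithm described in the abstract amounts to Sanger's Generalized Hebbian rule applied to kernel expansion coefficients rather than to an explicit feature-space basis, which is what keeps the memory footprint linear in the number of training samples per component. The following Python sketch illustrates the idea under simplifying assumptions: a Gaussian kernel, a constant learning rate, and no feature-space centering (the published algorithm includes centering terms and learning-rate schedules). All names and parameters here are illustrative, not taken from the paper.

import numpy as np

def kha_fit(X, n_components=4, n_epochs=100, eta=0.05, gamma=1.0, seed=0):
    """Minimal sketch of iterative KPCA via a kernelized Generalized Hebbian
    (Sanger-type) update.  Only the r x n coefficient matrix A is stored, so
    memory grows linearly with the number of training samples per component.
    Centering and other refinements of the published algorithm are omitted."""
    n = X.shape[0]
    rng = np.random.default_rng(seed)
    A = 1e-3 * rng.standard_normal((n_components, n))   # expansion coefficients

    def kvec(x):
        # Gaussian kernel values k(x, x_j) for all training samples x_j.
        return np.exp(-gamma * np.sum((X - x) ** 2, axis=1))

    for _ in range(n_epochs):
        for i in rng.permutation(n):            # sweep the training set in random order
            y = A @ kvec(X[i])                  # projections onto the current components
            # Sanger's rule on the coefficients:  A <- A + eta * (y e_i^T - LT(y y^T) A)
            A -= eta * (np.tril(np.outer(y, y)) @ A)   # lower-triangular decorrelation term
            A[:, i] += eta * y                          # Hebbian term for the current sample
    return A, kvec

A new pattern x would then be projected onto component c as A[c] @ kvec(x); the denoising and super-resolution applications additionally require solving a pre-image problem, which is not shown in this sketch.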

[1] S. Akkarakaran and P.P. Vaidyanathan, “The Role of Principal Component Filter Banks in Noise Reduction,” Wavelet Applications in Signal and Image Processing VII, pp. 346-357, vol. 3813, 1999.
[2] R.J. Baddeley and P.J.B. Hancock, “A Statistical Analysis of Natural Images Predicts Psychophysically Derived Orientation Tuning Curves,” Proc. Royal Soc. B, vol. 246, no. 1317, pp. 219-223, 1991.
[3] S. Baker and T. Kanade, “Limits on Super-Resolution and How to Break Them,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 24, no. 9, pp. 1167-1183, Sept. 2002.
[4] A.J. Bell and T.J. Sejnowski, “The 'Independent Components' of Natural Scenes are Edge Filters,” Vision Research, vol. 37, pp. 3327-3338, 1997.
[5] R.W. Buccigrossi and E.P. Simoncelli, “Image Compression via Joint Statistical Characterization in the Wavelet Domain,” IEEE Trans. Image Processing, vol. 8, no. 12, pp. 1688-1701, 1999.
[6] C.J.C. Burges, “Simplified Support Vector Decision Rules,” Proc. 13th Int'l Conf. Machine Learning, pp. 71-77, 1996.
[7] O. Chapelle and V. Vapnik, “Model Selection for Support Vector Machines,” Advances in Neural Information Processing Systems 12, S.A. Solla, T.K. Leen, and K.-R. Müller, eds. MIT Press, 2000.
[8] T. Chen, Y. Hua, and W.-Y. Yan, “Global Convergence of Oja's Subspace Algorithm for Principal Component Extraction,” IEEE Trans. Neural Networks, vol. 9, no. 1, pp. 58-67, 1998.
[9] H. Choi and R.G. Baraniuk, “Multiple Basis Wavelet Denoising Using Besov Projections,” Proc. IEEE Int'l Conf. Image Processing, pp. 595-599, 1999.
[10] N. Cristianini, J. Shawe-Taylor, A. Elisseeff, and J. Kandola, “On Kernel-Target Alignment,” Advances in Neural Information Processing Systems 14, T.G. Dietterich, S. Becker, and Z. Ghahramani, eds. Cambridge, Mass.: MIT Press, 2002.
[11] D.J. Field, “What Is the Goal of Sensory Coding?” Neural Computation, vol. 6, pp. 559-601, 1994.
[12] M.O. Franz and B. Schölkopf, “Implicit Estimation of Wiener Series,” Machine Learning for Signal Processing XIV, Proc. IEEE Signal Processing Soc. Workshop, A. Barros, J. Principe, J. Larsen, T. Adali, and S. Douglas, eds., pp. 735-744, 2004.
[13] W.T. Freeman, T.R. Jones, and E.C. Pasztor, “Example-Based Super-Resolution,” IEEE Computer Graphics and Applications, vol. 22, no. 2, pp. 56-65, 2002.
[14] W.T. Freeman, E.C. Pasztor, and O.T. Carmichael, “Learning Low-Level Vision,” Int'l J. Computer Vision, vol. 40, no. 1, pp. 25-47, 2000.
[15] A.S. Georghiades, P.N. Belhumeur, and D.J. Kriegman, “From Few to Many: Illumination Cone Models for Face Recognition under Variable Lighting and Pose,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 23, no. 6, pp. 643-660, June 2001.
[16] F. Girosi, “An Equivalence between Sparse Approximation and Support Vector Machines,” Neural Computation, vol. 10, no. 6, pp. 1455-1480, 1998.
[17] R.C. Gonzalez and R.E. Woods, Digital Image Processing. Addison-Wesley, 1992.
[18] U. Grenander and A. Srivastava, “Probability Models for Clutter in Natural Images,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 23, no. 4, pp. 424-429, Apr. 2001.
[19] J. Ham, D.D. Lee, S. Mika, and B. Schölkopf, “A Kernel View of the Dimensionality Reduction of Manifolds,” Technical Report 110, Max-Planck-Institut für Biologische Kybernetik, Tübingen, Germany, July 2003.
[20] P.J.B. Hancock, R.J. Baddeley, and L.S. Smith, “The Principal Components of Natural Images,” Network, vol. 3, pp. 61-70, 1992.
[21] S. Haykin, Neural Networks: A Comprehensive Foundation, second ed. Prentice Hall, 1999.
[22] R. Herbrich, Learning Kernel Classifiers: Theory and Algorithms. Cambridge, Mass.: MIT Press, 2001.
[23] A. Hertzmann, C.E. Jacobs, N. Oliver, B. Curless, and D.H. Salesin, “Image Analogies,” Computer Graphics (Proc. Siggraph 2001), pp. 327-340, 2001.
[24] J. Hurri, A. Hyvärinen, J. Karhunen, and E. Oja, “Image Feature Extraction Using Independent Component Analysis,” Proc. IEEE 1996 Nordic Conf. Signal Processing (NORSIG '96), 1996.
[25] R. Keys, “Cubic Convolution Interpolation for Digital Image Processing,” IEEE Trans. Acoustics, Speech, Signal Processing, vol. 29, no. 6, pp. 1153-1160, 1981.
[26] K.I. Kim, M.O. Franz, and B. Schölkopf, “Kernel Hebbian Algorithm for Iterative Kernel Principal Component Analysis,” Technical Report 109, Max-Planck-Institut für Biologische Kybernetik, Tübingen, Germany, June 2003.
[27] J.T. Kwok, B. Mak, and S. Ho, “Eigenvoice Speaker Adaptation via Composite Kernel Principal Component Analysis,” Advances in Neural Information Processing Systems, S. Thrun, L. Saul, and B. Schölkopf, eds. Cambridge, Mass.: MIT Press, 2004.
[28] J.T. Kwok and I.W. Tsang, “The Pre-Image Problem in Kernel Methods,” IEEE Trans. Neural Networks, vol. 15, no. 6, pp. 1517-1525, 2004.
[29] A.B. Lee, D. Mumford, and J. Huang, “Occlusion Models for Natural Images: A Statistical Study of a Scale-Invariant Dead Leaves Model,” Int'l J. Computer Vision, vol. 41, nos. 1/2, pp. 35-59, 2001.
[30] J. Malik, S. Belongie, T. Leung, and J. Shi, “Contour and Texture Analysis for Image Segmentation,” Int'l J. Computer Vision, vol. 43, no. 1, pp. 7-27, 2001.
[31] S. Mika, G. Rätsch, J. Weston, B. Schölkopf, and K.-R. Müller, “Fisher Discriminant Analysis with Kernels,” Neural Networks for Signal Processing IX, Y.-H. Hu, J. Larsen, E. Wilson, and S. Douglas, eds., pp. 41-48. IEEE, 1999.
[32] S. Mika, B. Schölkopf, A.J. Smola, K.-R. Müller, M. Scholz, and G. Rätsch, “Kernel PCA and De-Noising in Feature Spaces,” Advances in Neural Information Processing Systems 11, M.S. Kearns, S.A. Solla, and D.A. Cohn, eds., pp. 536-542. Cambridge, Mass.: MIT Press, 1999.
[33] D.C. Munson, “A Note on Lena,” IEEE Trans. Image Processing, vol. 5, no. 1, 1996.
[34] E. Oja, “A Simplified Neuron Model as a Principal Component Analyzer,” J. Math. Biology, vol. 15, pp. 267-273, 1982.
[35] E. Oja, “Principal Components, Minor Components, and Linear Neural Networks,” Neural Networks, vol. 5, pp. 927-935, 1992.
[36] B.A. Olshausen and D.J. Field, “Emergence of Simple-Cell Receptive Field Properties by Learning a Sparse Code for Natural Images,” Nature, vol. 381, pp. 607-609, 1996.
[37] A. Pizurica and W. Philips, “Estimating Probability of Presence of a Signal of Interest in Multiresolution Single- and Multiband Image Denoising,” IEEE Trans. Image Processing, in press.
[38] T. Poggio and F. Girosi, “Extension of a Theory of Networks for Approximation and Learning: Dimensionality Reduction and Clustering,” Proc. Image Understanding Workshop, pp. 597-603, 1990.
[39] S. Romdhani, S. Gong, and A. Psarrou, “A Multiview Nonlinear Active Shape Model Using Kernel PCA,” Proc. British Machine Vision Conf., pp. 483-492, 1999.
[40] D.L. Ruderman, “Origins of Scaling in Natural Images,” Vision Research, vol. 37, no. 23, pp. 3385-3395, 1997.
[41] T.D. Sanger, “Optimal Unsupervised Learning in a Single-Layer Linear Feedforward Neural Network,” Neural Networks, vol. 2, pp. 459-473, 1989.
[42] B. Schölkopf and A. Smola, Learning with Kernels. Cambridge, Mass.: MIT Press, 2002.
[43] B. Schölkopf, A. Smola, and K. Müller, “Nonlinear Component Analysis as a Kernel Eigenvalue Problem,” Neural Computation, vol. 10, no. 5, pp. 1299-1319, 1998.
[44] E.P. Simoncelli, “Bayesian Denoising of Visual Images in the Wavelet Domain,” Bayesian Inference in Wavelet Based Models, P. Müller and B. Vidakovic, eds., pp. 291-308. New York: Springer, 1999.
[45] I. Steinwart, “On the Influence of the Kernel on the Consistency of Support Vector Machines,” J. Machine Learning Research, vol. 2, pp. 67-93, 2001.
[46] M. Turk and A. Pentland, “Face Recognition Using Eigenfaces,” Proc. IEEE Conf. Computer Vision and Pattern Recognition, pp. 586-591, 1991.
[47] C.J. Twining and C.J. Taylor, “Kernel Principal Component Analysis and the Construction of Non-Linear Active Shape Models,” Proc. British Machine Vision Conf., pp. 23-32, 2001.
[48] V. Vapnik and O. Chapelle, “Bounds on Error Expectation for SVM,” Advances in Large Margin Classifiers, A.J. Smola, P.L. Bartlett, B. Schölkopf, and D. Schuurmans, eds., pp. 261-280. Cambridge, Mass.: MIT Press, 2000.
[49] U. von Luxburg, O. Bousquet, and B. Schölkopf, “A Compression Approach to Support Vector Model Selection,” The J. Machine Learning Research, vol. 5, pp. 293-323, 2004.
[50] S. Yoshizawa, U. Helmke, and K. Starkov, “Convergence Analysis for Principal Component Flows,” Int'l J. Applied Math. Computer Science, vol. 11, no. 1, pp. 223-236, 2001.
[51] S. Zhu and D. Mumford, “Prior Learning and Gibbs Reaction-Diffusion,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 19, no. 11, pp. 1236-1250, Nov. 1997.

Index Terms:
Principal component analysis, kernel methods, image models, image enhancement, unsupervised learning.
Citation:
Kwang In Kim, Matthias O. Franz, Bernhard Schölkopf, "Iterative Kernel Principal Component Analysis for Image Modeling," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, no. 9, pp. 1351-1366, Sept. 2005, doi:10.1109/TPAMI.2005.181