
P. Honeine, "Online Kernel Principal Component Analysis: A Reduced-Order Model," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 34, no. 9, pp. 1814-1826, Sept. 2012, doi: 10.1109/TPAMI.2011.270.