Bibliographic References
ASCII Text
David C. Hoyle, "Accuracy of Pseudo-Inverse Covariance Learning—A Random Matrix Theory Analysis," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 7, pp. 1470-1481, July 2011.
BibTeX
@article{10.1109/TPAMI.2010.186,
  author    = {David C. Hoyle},
  title     = {Accuracy of Pseudo-Inverse Covariance Learning—A Random Matrix Theory Analysis},
  journal   = {IEEE Transactions on Pattern Analysis and Machine Intelligence},
  volume    = {33},
  number    = {7},
  issn      = {0162-8828},
  year      = {2011},
  pages     = {1470-1481},
  doi       = {http://doi.ieeecomputersociety.org/10.1109/TPAMI.2010.186},
  publisher = {IEEE Computer Society},
  address   = {Los Alamitos, CA, USA}
}
RefWorks / ProCite / RefMan / EndNote
TY  - JOUR
JO  - IEEE Transactions on Pattern Analysis and Machine Intelligence
TI  - Accuracy of Pseudo-Inverse Covariance Learning—A Random Matrix Theory Analysis
IS  - 7
SN  - 0162-8828
SP  - 1470
EP  - 1481
A1  - David C. Hoyle
PY  - 2011
KW  - Pseudoinverse
KW  - linear discriminants
KW  - peaking phenomenon
KW  - random matrix theory
KW  - bagging
KW  - random subspace method
VL  - 33
JA  - IEEE Transactions on Pattern Analysis and Machine Intelligence
ER  -
[1] C.M. Bishop, Neural Networks for Pattern Recognition. Oxford Univ. Press, 1995.
[2] T. Golub, D. Slonim, P. Tamayo, C. Huard, M. Gaasenbeek, J. Mesirov, H. Coller, M. Loh, J. Downing, M. Caligiuri, C. Bloomfield, and E. Lander, "Molecular Classification of Cancer: Class Discovery and Class Prediction by Gene Expression Monitoring," Science, vol. 286, pp. 531-537, 1999.
[3] A. Price, N. Patterson, R. Plenge, M. Weinblatt, N. Shadick, and D. Reich, "Principal Components Analysis Corrects for Stratification in Genome-Wide Association Studies," Nature Genetics, vol. 38, pp. 904-909, 2006.
[4] E.H. Moore, "On the Reciprocal of the General Algebraic Matrix," Bull. Am. Math. Soc., vol. 26, pp. 394-395, 1920.
[5] R. Penrose, "A Generalized Inverse for Matrices," Proc. Cambridge Philosophical Soc., vol. 51, pp. 406-413, 1955.
[6] S. Raudys and R. Duin, "Expected Classification Error of the Fisher Linear Classifier with Pseudo-Inverse Covariance Matrix," Pattern Recognition Letters, vol. 19, pp. 385-392, 1998.
[7] W. Krzanowski, P. Jonathan, W. McCarthy, and M. Thomas, "Discriminant Analysis with Singular Covariance Matrices: Methods and Applications to Spectroscopic Data," Applied Statistics, vol. 44, pp. 101-115, 1995.
[8] J. Schäfer and K. Strimmer, "An Empirical Bayes Approach to Inferring Large-Scale Gene Association Networks," Bioinformatics, vol. 21, pp. 754-764, 2005.
[9] R.A. Horn and C.R. Johnson, Matrix Analysis. Cambridge Univ. Press, 1985.
[10] Y. Le Cun, I. Kanter, and S. Solla, "Eigenvalues of Covariance Matrices: Application to Neural-Network Learning," Physical Rev. Letters, vol. 66, pp. 2396-2399, 1991.
[11] A. Krogh and J. Hertz, "Generalization in a Linear Perceptron in the Presence of Noise," J. Physics A: Math. and General, vol. 25, pp. 1135-1147, 1992.
[12] L. Hansen, "Stochastic Linear Learning: Exact Test and Training Error Averages," Neural Networks, vol. 6, pp. 393-396, 1993.
[13] D. Barber, D. Saad, and P. Sollich, "Finite-Size Effects and Optimal Test Set Size in Linear Perceptrons," J. Physics A: Math. and General, vol. 28, pp. 1325-1334, 1995.
[14] J. von Neumann, "Some Matrix-Inequalities and Metrization of Matrix-Space," Tomsk Univ. Rev., vol. 1, pp. 286-300, 1937.
[15] L. Mirsky, "A Trace Inequality of John von Neumann," Monatshefte für Mathematik, vol. 79, pp. 303-306, 1975.
[16] J. Lasserre, "A Trace Inequality for Matrix Product," IEEE Trans. Automatic Control, vol. 40, no. 8, pp. 1500-1501, Aug. 1995.
[17] G.H. Hardy, J.E. Littlewood, and G. Pólya, Inequalities, second ed. Cambridge Univ. Press, 1988.
[18] W. Young, "On the Multiplication of Successions of Fourier Constants," Proc. Royal Soc. Series A, vol. 87, pp. 331-339, 1912.
[19] I. Johnstone, "High Dimensional Statistical Inference and Random Matrices," Proc. Int'l Congress of Mathematicians, M. Sanz-Solé, J. Soria, J. Varona, and J. Verdera, eds., 2006.
[20] D. Hoyle and M. Rattray, "Statistical Mechanics of Learning Multiple Orthogonal Signals: Asymptotic Theory and Fluctuation Effects," Physical Rev. E, vol. 75, 2007, doi: 10.1103/PhysRevE.75.016101.
[21] V.A. Marčenko and L.A. Pastur, "Distribution of Eigenvalues for Some Sets of Random Matrices," Math. USSR-Sb., vol. 1, pp. 457-483, 1967.
[22] K. Wachter, "The Strong Limits of Random Matrix Spectra for Sample Matrices of Independent Elements," Annals of Probability, vol. 6, pp. 1-18, 1978.
[23] A.M. Sengupta and P.P. Mitra, "Distributions of Singular Values for Some Random Matrices," Physical Rev. E, vol. 60, pp. 3389-3392, 1999.
[24] D. Hoyle and M. Rattray, "A Statistical Mechanics Analysis of Gram Matrix Eigenvalue Spectra," Proc. Conf. Learning Theory, J. Shawe-Taylor and Y. Singer, eds., 2004.
[25] L. Breiman, "Bagging Predictors," Machine Learning, vol. 24, pp. 123-140, 1996.
[26] G. Stewart, "The Efficient Generation of Random Orthogonal Matrices with an Application to Condition Estimators," SIAM J. Numerical Analysis, vol. 17, pp. 403-409, 1980.
[27] I. Guyon, J. Li, T. Mader, P.A. Pletscher, G. Schneider, and M. Uhr, "Competitive Baseline Methods Set New Standards for the NIPS 2003 Feature Selection Benchmark," Pattern Recognition Letters, vol. 28, pp. 1438-1444, 2007.
[28] D. Hoyle, M. Rattray, R. Jupp, and A. Brass, "Making Sense of Microarray Data Distributions," Bioinformatics, vol. 18, pp. 576-584, 2002.
[29] B. Schölkopf, A. Smola, and K.-R. Müller, "Nonlinear Component Analysis as a Kernel Eigenvalue Problem," Neural Computation, vol. 10, pp. 1299-1319, 1998.
[30] I.M. Johnstone, "On the Distribution of the Largest Eigenvalue in Principal Components Analysis," Annals of Statistics, vol. 29, pp. 295-327, 2001.
[31] J. Baik, G. Ben Arous, and S. Péché, "Phase Transition of the Largest Eigenvalue for Non-Null Complex Sample Covariance Matrices," Annals of Probability, vol. 33, pp. 1643-1697, 2005.
[32] J. Baik and J. Silverstein, "Eigenvalues of Large Sample Covariance Matrices of Spiked Population Models," J. Multivariate Analysis, vol. 97, pp. 1382-1408, 2006.
[33] D. Paul, "Asymptotics of Sample Eigenstructure for a Large Dimensional Spiked Covariance Model," Statistica Sinica, vol. 17, pp. 1617-1642, 2007.
[34] D.C. Hoyle and M. Rattray, "PCA Learning for Sparse High-Dimensional Data," Europhysics Letters, vol. 62, pp. 117-123, 2003.
[35] D. Hoyle and M. Rattray, "Principal-Component-Analysis Eigenvalue Spectra from Data with Symmetry-Breaking Structure," Physical Rev. E, vol. 69, 2004, doi: 10.1103/PhysRevE.69.026124.
[36] M. Tipping and C. Bishop, "Probabilistic Principal Component Analysis," J. Royal Statistical Soc. B, vol. 61, pp. 611-622, 1999.
[37] T. Minka, "Automatic Choice of Dimensionality for PCA," Advances in Neural Information Processing Systems, T. Leen, T. Dietterich, and V. Tresp, eds., pp. 598-604, MIT Press, 2001.
[38] D. Hoyle, "Automatic PCA Dimension Selection for High Dimensional Data and Small Sample Sizes," J. Machine Learning Research, vol. 9, pp. 2733-2759, 2008.
[39] J. Schäfer and K. Strimmer, "A Shrinkage Approach to Large-Scale Covariance Matrix Estimation and Implications for Functional Genomics," Statistical Applications in Genetics and Molecular Biology, vol. 4, 2005.
[40] P. Bickel and E. Levina, "Regularized Estimation of Large Covariance Matrices," Annals of Statistics, vol. 36, pp. 199-227, 2008.