
Bibliographic References  
Kar-Ann Toh and How-Lung Eng, "Between Classification-Error Approximation and Weighted Least-Squares Learning," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 4, pp. 658-669, Apr. 2008. ISSN 0162-8828. DOI: 10.1109/TPAMI.2007.70730. Publisher: IEEE Computer Society, Los Alamitos, CA, USA.

Keywords: Pattern Classification, Classification Error Rate, Discriminant Functions, Polynomials, Machine Learning.
[1] R.O. Duda, P.E. Hart, and D.G. Stork, Pattern Classification, second ed. John Wiley & Sons, 2001.
[2] J. Schürmann, Pattern Classification: A Unified View of Statistical and Neural Approaches. John Wiley & Sons, 1996.
[3] T. Hastie, R. Tibshirani, and J. Friedman, The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer, 2001.
[4] T. Poggio, R. Rifkin, S. Mukherjee, and P. Niyogi, “General Conditions for Predictivity in Learning Theory,” Nature, vol. 428, pp. 419-422, Mar. 2004.
[5] G. Baudat and F. Anouar, “Generalized Discriminant Analysis Using a Kernel Approach,” Neural Computation, vol. 12, pp. 2385-2404, 2000.
[6] J. Lu, K.N. Plataniotis, and A.N. Venetsanopoulos, “Face Recognition Using Kernel Direct Discriminant Analysis Algorithms,” IEEE Trans. Neural Networks, vol. 14, no. 1, pp. 117-126, 2003.
[7] B.E. Boser, I.M. Guyon, and V.N. Vapnik, “A Training Algorithm for Optimal Margin Classifiers,” Proc. Fifth Ann. Workshop Computational Learning Theory, pp. 144-152, 1992.
[8] V.N. Vapnik, Statistical Learning Theory. Wiley-Interscience, 1998.
[9] E.E. Osuna, R. Freund, and F. Girosi, “Support Vector Machines: Training and Applications,” Technical Report: A.I. Memo No. 1602, C.B.C.L. Paper No. 144, MIT Artificial Intelligence Laboratory and CBCL Dept. of Brain and Cognitive Sciences, 1997.
[10] C.J.C. Burges, “A Tutorial on Support Vector Machines for Pattern Recognition,” Data Mining and Knowledge Discovery, vol. 2, no. 2, pp. 121-167, 1998.
[11] E. Jones, P. Runkle, N. Dasgupta, L. Couchman, and L. Carin, “Genetic Algorithm Wavelet Design for Signal Classification,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 23, no. 8, pp. 890-895, Aug. 2001.
[12] N.S. Kim and S.S. Park, “Discriminative Training for Concatenative Speech Synthesis,” IEEE Signal Processing Letters, vol. 11, no. 1, pp. 40-43, 2004.
[13] M. Rimer and T. Martinez, “Classification-Based Objective Functions,” Machine Learning, vol. 63, no. 2, pp. 183-205, 2006.
[14] B.E. Boser, I.M. Guyon, and V.N. Vapnik, “A Training Algorithm for Optimal Margin Classifiers,” Proc. Fifth ACM Workshop Computational Learning Theory, pp. 144-152, 1992.
[15] T. Poggio and F. Girosi, “Networks for Approximation and Learning,” Proc. IEEE, vol. 78, no. 9, pp. 1481-1497, 1990.
[16] B. Schölkopf and A.J. Smola, Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. MIT Press, 2002.
[17] K.A. Toh, Q.L. Tran, and D. Srinivasan, “Benchmarking a Reduced Multivariate Polynomial Pattern Classifier,” IEEE Trans. Pattern Analysis and Machine Intelligence, vol. 26, no. 6, pp. 740-755, June 2004.
[18] K.A. Toh, “Learning from Target Knowledge Approximation,” Proc. First IEEE Conf. Industrial Electronics and Applications, pp. 815-822, May 2006.
[19] G.J. Gordon, “${\rm Generalized}^{2}\;{\rm Linear}^{2}$ Models,” Proc. Advances in Neural Information Processing Systems (NIPS '02), pp. 577-584, Dec. 2002.
[20] P. McCullagh and J.A. Nelder, Generalized Linear Models, second ed. Chapman and Hall, 1989.
[21] T. Hastie, R. Tibshirani, and J. Friedman, The Elements of Statistical Learning: Data Mining, Inference, and Prediction. Springer, 2001.
[22] N.R. Draper and H. Smith, Applied Regression Analysis. John Wiley & Sons, 1998.
[23] Y. Freund and R.E. Schapire, “A Short Introduction to Boosting,” J. Japanese Soc. Artificial Intelligence, no. 5, pp. 771-780, Sept. 1999.
[24] Y. Freund, “Boosting a Weak Learning Algorithm by Majority,” Information and Computation, vol. 121, pp. 256-285, 1995.
[25] K.A. Toh, “Training a Reciprocal-Sigmoid Classifier by Feature Scaling-Space,” Machine Learning, vol. 65, no. 1, pp. 273-308, 2006.
[26] D.J. Newman, S. Hettich, C.L. Blake, and C.J. Merz, “UCI Repository of Machine Learning Databases,” Univ. of California, Dept. of Information and Computer Sciences, http://www.ics.uci.edu/~mlearn/MLRepository.html, 1998.
[27] T.S. Lim, W.Y. Loh, and Y.S. Shih, “A Comparison of Prediction Accuracy, Complexity, and Training Time of Thirty-Three Old and New Classification Algorithms,” Machine Learning, vol. 40, no. 3, pp. 203-228, 2000.
[28] J. Li, G. Dong, K. Ramamohanarao, and L. Wong, “DeEPs: A New Instance-Based Lazy Discovery and Classification System,” Machine Learning, vol. 54, no. 2, pp. 99-124, 2004.