
Bibliographic References

ASCII Text
Gang Wu, Edward Y. Chang, "KBA: Kernel Boundary Alignment Considering Imbalanced Data Distribution," IEEE Transactions on Knowledge and Data Engineering, vol. 17, no. 6, pp. 786-795, June 2005.
BibTeX

@article{10.1109/TKDE.2005.95,
  author    = {Gang Wu and Edward Y. Chang},
  title     = {KBA: Kernel Boundary Alignment Considering Imbalanced Data Distribution},
  journal   = {IEEE Transactions on Knowledge and Data Engineering},
  volume    = {17},
  number    = {6},
  issn      = {1041-4347},
  year      = {2005},
  pages     = {786-795},
  doi       = {http://doi.ieeecomputersociety.org/10.1109/TKDE.2005.95},
  publisher = {IEEE Computer Society},
  address   = {Los Alamitos, CA, USA}
}
Keywords: Imbalanced-data training, support vector machines, supervised classification.
[1] S. Amari and S. Wu, “Improving Support Vector Machine Classifiers by Modifying Kernel Functions,” Neural Networks, vol. 12, no. 6, pp. 783-789, 1999.
[2] A.P. Bradley, “The Use of the Area under the ROC Curve in the Evaluation of Machine Learning Algorithms,” Pattern Recognition, vol. 30, no. 7, pp. 1145-1159, 1997.
[3] L. Breiman, “Bagging Predictors,” Machine Learning, vol. 24, no. 2, pp. 123-140, 1996.
[4] C. Burges, “Geometry and Invariance in Kernel Based Methods,” Advances in Kernel Methods: Support Vector Learning, Cambridge, Mass.: MIT Press, 1999.
[5] C. Cardie and N. Howe, “Improving Minority Class Prediction Using Case-Specific Feature Weights,” Proc. 14th Int'l Conf. Machine Learning, pp. 57-65, 1997.
[6] P. Chan and S. Stolfo, “Learning with Non-Uniform Class and Cost Distributions: Effects and a Distributed Multi-Classifier Approach,” Proc. Workshop Notes KDD-98 Distributed Data Mining, pp. 1-9, 1998.
[7] N. Chawla, K. Bowyer, L. Hall, and W.P. Kegelmeyer, “SMOTE: Synthetic Minority Over-Sampling Technique,” J. Artificial Intelligence Research, vol. 16, pp. 321-357, 2002.
[8] N. Cristianini, J. Shawe-Taylor, and J. Kandola, “On Kernel Target Alignment,” Proc. Neural Information Processing Systems, pp. 367-373, 2001.
[9] T. Dietterich and G. Bakiri, “Solving Multiclass Learning Problems via Error-Correcting Output Codes,” J. Artificial Intelligence Research, vol. 2, pp. 263-286, 1995.
[10] C. Drummond and R. Holte, “Exploiting the Cost (in)Sensitivity of Decision Tree Splitting Criteria,” Proc. 17th Int'l Conf. Machine Learning, pp. 239-246, 2000.
[11] T. Fawcett and F. Provost, “Adaptive Fraud Detection,” Data Mining and Knowledge Discovery, vol. 1, no. 3, pp. 291-316, 1997.
[12] K. Fukunaga, Introduction to Statistical Pattern Recognition, second ed. Boston: Academic Press, 1990.
[13] T. Hastie, R. Tibshirani, and J. Friedman, The Elements of Statistical Learning: Data Mining, Inference, and Prediction. New York: Springer, 2001.
[14] T. Joachims, “Text Categorization with Support Vector Machines: Learning with Many Relevant Features,” Proc. 10th European Conf. Machine Learning, pp. 137-142, 1998.
[15] J. Kandola and J. Shawe-Taylor, “Refining Kernels for Regression and Uneven Classification Problems,” Proc. Ninth Int'l Workshop Artificial Intelligence and Statistics, 2003.
[16] G. Karakoulas and J.S. Taylor, “Optimizing Classifiers for Imbalanced Training Sets,” Advances in Neural Information Processing Systems, 1999.
[17] M. Kubat and S. Matwin, “Addressing the Curse of Imbalanced Training Sets: One-Sided Selection,” Proc. 14th Int'l Conf. Machine Learning, pp. 179-186, 1997.
[18] H.W. Kuhn and A.W. Tucker, “Nonlinear Programming,” Proc. Second Berkeley Symp. Math. Statistics and Probability, 1951.
[19] Y. Lin, Y. Lee, and G. Wahba, “Support Vector Machines for Classification in Nonstandard Situations,” Machine Learning, vol. 46, pp. 191-202, 2002.
[20] A. Nugroho, S. Kuroyanagi, and A. Iwata, “A Solution for Imbalanced Training Sets Problem by CombNET-II and Its Application on Fog Forecasting,” IEICE Trans. Information and Systems, vol. E85-D, no. 7, pp. 1165-1174, July 2002.
[21] B. Schölkopf and A. Smola, Learning with Kernels: Support Vector Machines, Regularization, Optimization, and Beyond. Cambridge, Mass.: MIT Press, 2002.
[22] S. Tong and E. Chang, “Support Vector Machine Active Learning for Image Retrieval,” Proc. ACM Int'l Conf. Multimedia, pp. 107-118, 2001.
[23] V. Vapnik, The Nature of Statistical Learning Theory. New York: Springer, 1995.
[24] K. Veropoulos, C. Campbell, and N. Cristianini, “Controlling the Sensitivity of Support Vector Machines,” Proc. Int'l Joint Conf. Artificial Intelligence, pp. 55-60, 1999.
[25] G.M. Weiss, “Mining with Rarity: A Unifying Framework,” SIGKDD Explorations, vol. 6, no. 1, pp. 7-19, June 2004.
[26] G.M. Weiss and F. Provost, “Learning When Training Data Are Costly: The Effect of Class Distribution on Tree Induction,” J. Artificial Intelligence Research, vol. 19, pp. 315-354, 2003.
[27] G. Wu and E. Chang, “Adaptive Feature-Space Conformal Transformation for Imbalanced Data Learning,” Proc. 20th Int'l Conf. Machine Learning, pp. 816-823, 2003.
[28] G. Wu, Y. Wu, L. Jiao, Y.F. Wang, and E. Chang, “Multi-Camera Spatio-Temporal Fusion and Biased Sequence-Data Learning for Security Surveillance,” Proc. ACM Int'l Conf. Multimedia, Nov. 2003.
[29] X. Wu and R. Srihari, “New ν-Support Vector Machines and Their Sequential Minimal Optimization,” Proc. 20th Int'l Conf. Machine Learning, 2003.
[30] X. Wu and R. Srihari, “Incorporating Prior Knowledge with Weighted Margin Support Vector Machines,” Proc. 10th ACM SIGKDD Int'l Conf. Knowledge Discovery and Data Mining, 2004.