Issue No. 6, June 2008 (vol. 30), pp. 1041-1054
A common way to model multi-class classification problems is by means of Error-Correcting Output Codes (ECOC). Given a multi-class problem, the ECOC technique designs a codeword for each class, where each position of the code identifies the membership of the class in a given binary problem. A classification decision is obtained by assigning the label of the class with the closest codeword. One of the main requirements of the ECOC design is that the base classifier be capable of splitting each sub-group of classes in each binary problem. However, we cannot guarantee that a linear classifier can model convex regions, and non-linear classifiers also fail to handle some types of decision surfaces. In this paper, we present a novel strategy to model multi-class classification problems using sub-class information in the ECOC framework. Complex problems are solved by splitting the original set of classes into sub-classes and embedding the resulting binary problems in a problem-dependent ECOC design. Experimental results show that the proposed splitting procedure yields better performance when class overlap or the distribution of the training objects conceals the decision boundaries from the base classifier. The results are even more significant when the training set is sufficiently large.
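The codeword-and-closest-code decision described above can be sketched in a few lines. This is a minimal illustration under our own assumptions (a one-vs-all coding matrix and Hamming-distance decoding); it is not the problem-dependent design proposed in the paper, and the names are our own.

```python
import numpy as np

# Hypothetical 4-class problem with a one-vs-all coding matrix:
# each row is a class codeword, each column defines one binary problem
# (+1: class belongs to the positive partition, -1: negative partition).
M = np.array([
    [ 1, -1, -1, -1],
    [-1,  1, -1, -1],
    [-1, -1,  1, -1],
    [-1, -1, -1,  1],
])

def decode(binary_outputs, M):
    """Return the label of the class whose codeword is closest
    (in Hamming distance) to the signs of the binary-classifier outputs."""
    signs = np.sign(np.asarray(binary_outputs))
    distances = np.sum(M != signs, axis=1)
    return int(np.argmin(distances))

# Example: the binary classifiers vote for the third class.
print(decode([-1, -1, 1, -1], M))  # -> 2
```

With an error-correcting matrix (more columns than classes and large row separation), a few wrong binary outputs can still be corrected by the same nearest-codeword rule; the paper's contribution is to choose the binary partitions, including sub-class splits, in a problem-dependent way.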
Pattern Recognition, Machine Learning, Statistical Models, Computing Methodologies, Classifier Design and Evaluation, Design Methodology
Sergio Escalera, David M.J. Tax, Oriol Pujol, Petia Radeva, Robert P.W. Duin, "Subclass Problem-Dependent Design for Error-Correcting Output Codes," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 30, no. 6, pp. 1041-1054, June 2008, doi:10.1109/TPAMI.2008.38