Issue No. 02 - February (2011 vol. 33)
DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/TPAMI.2010.71
Hamed Masnadi-Shirazi , University of California at San Diego, La Jolla
Nuno Vasconcelos , University of California at San Diego, La Jolla
A novel framework is proposed for the design of cost-sensitive boosting algorithms. The framework is based on the identification of two necessary conditions for optimal cost-sensitive learning: 1) expected losses must be minimized by optimal cost-sensitive decision rules, and 2) empirical loss minimization must emphasize the neighborhood of the target cost-sensitive boundary. It is shown that these conditions enable the derivation of cost-sensitive losses that can be minimized by gradient descent, in the functional space of convex combinations of weak learners, to produce novel boosting algorithms. The proposed framework is applied to the derivation of cost-sensitive extensions of AdaBoost, RealBoost, and LogitBoost. Experimental evidence in support of the cost-sensitive optimality of the new algorithms is presented on a synthetic problem, standard data sets, and the computer vision problems of face and car detection. Their performance is also compared to that of various previous cost-sensitive boosting proposals, as well as the popular combination of large-margin classifiers and probability calibration. Cost-sensitive boosting is shown to consistently outperform all other methods.
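To make the idea concrete, the following is a minimal illustrative sketch of cost-sensitive boosting, not the paper's exact derivation: a plain AdaBoost loop over decision stumps in which per-example weights are initialized in proportion to misclassification costs, so that the learned boundary is pushed toward the low-cost class. All function names (`cost_sensitive_adaboost`, `best_stump`, etc.) are hypothetical helpers introduced here for illustration.

```python
import math

def stump_predict(x, feat, thresh, sign):
    # Decision stump: returns +/-1 depending on which side of the
    # threshold the chosen feature value falls.
    return sign if x[feat] <= thresh else -sign

def best_stump(X, y, w):
    # Exhaustive search for the stump with minimum weighted error.
    best, best_err = None, float("inf")
    for feat in range(len(X[0])):
        for thresh in sorted({x[feat] for x in X}):
            for sign in (1.0, -1.0):
                err = sum(wi for xi, yi, wi in zip(X, y, w)
                          if stump_predict(xi, feat, thresh, sign) != yi)
                if err < best_err:
                    best_err, best = err, (feat, thresh, sign)
    return best, best_err

def cost_sensitive_adaboost(X, y, costs, n_rounds=10):
    # Illustrative cost-sensitive variant (an assumption, not the
    # paper's CS-AdaBoost): weights start proportional to each
    # example's misclassification cost, then follow the usual
    # exponential AdaBoost update.
    total = sum(costs)
    w = [c / total for c in costs]
    ensemble = []
    for _ in range(n_rounds):
        stump, err = best_stump(X, y, w)
        err = min(max(err, 1e-10), 1 - 1e-10)   # numerical safety
        alpha = 0.5 * math.log((1 - err) / err)
        preds = [stump_predict(xi, *stump) for xi in X]
        w = [wi * math.exp(-alpha * yi * pi)
             for wi, yi, pi in zip(w, y, preds)]
        z = sum(w)
        w = [wi / z for wi in w]                # renormalize
        ensemble.append((alpha, stump))
    return ensemble

def ensemble_predict(ensemble, x):
    # Sign of the weighted vote of all stumps.
    score = sum(alpha * stump_predict(x, *stump)
                for alpha, stump in ensemble)
    return 1.0 if score >= 0 else -1.0

# Toy 1-D example: positives carry 5x the misclassification cost.
X = [[0.0], [1.0], [2.0], [3.0]]
y = [-1.0, -1.0, 1.0, 1.0]
costs = [1.0, 1.0, 5.0, 5.0]
ens = cost_sensitive_adaboost(X, y, costs, n_rounds=5)
```

On this separable toy set the ensemble fits the data exactly; on overlapping classes the cost-weighted initialization trades false positives for false negatives according to the supplied costs.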
Boosting, AdaBoost, cost-sensitive learning, asymmetric boosting.
H. Masnadi-Shirazi and N. Vasconcelos, "Cost-Sensitive Boosting," in IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 33, no. 2, pp. 294-309, 2011.