2012 IEEE 12th International Conference on Data Mining Workshops (ICDMW 2012)

Brussels, Belgium

Dec. 10, 2012

ISBN: 978-1-4673-5164-5

pp: 495-499

DOI Bookmark: http://doi.ieeecomputersociety.org/10.1109/ICDMW.2012.79

ABSTRACT

Based on L-2 support vector machines (SVMs), Vapnik and Vashist introduced the concept of Learning Using Privileged Information (LUPI). This new paradigm takes the elements of human teaching into account during machine learning. However, to make use of privileged information, the extended L-2 SVM model given by Vapnik and Vashist doubles the number of parameters of the standard L-2 SVM, so a great deal of computing time must be spent on parameter tuning. To reduce this workload, we proposed using the L-1 SVM instead of the L-2 SVM for LUPI in our previous work. Unlike LUPI with the L-2 SVM, which is formulated as a quadratic program, LUPI with the L-1 SVM is essentially a linear program and is computationally much cheaper. On this basis, in this paper we discuss how to employ the wisdom of teachers better and more flexibly through LUPI with the L-1 SVM. By introducing kernels, we propose an extended L-1 SVM model that is still a linear program. With the help of nonlinear kernels, the new model allows the privileged information to be explored in a transformed feature space instead of the original data domain. Numerical experiments are carried out on both time series prediction and digit recognition problems, and the results validate the effectiveness of the new method.
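The abstract's central computational point, that training a 1-norm SVM reduces to linear programming, can be illustrated with a small sketch. The code below is not the authors' LUPI model; it is the standard (non-privileged) L-1 soft-margin SVM written as a linear program and solved with `scipy.optimize.linprog`. The helper name `l1_svm_lp` and the toy data are illustrative assumptions; the key trick is splitting the weight vector as w = u − v with u, v ≥ 0 so the 1-norm objective becomes linear.

```python
# Sketch only: the standard 1-norm soft-margin SVM posed as a linear program.
# This is NOT the paper's extended LUPI model, just an illustration of why
# L-1 SVM training is a linear program rather than a quadratic one.
import numpy as np
from scipy.optimize import linprog


def l1_svm_lp(X, y, C=1.0):
    """Solve  min ||w||_1 + C*sum(xi)  s.t.  y_i(w.x_i + b) >= 1 - xi_i, xi >= 0.

    Splitting w = u - v with u, v >= 0 turns ||w||_1 into the linear
    objective sum(u) + sum(v), giving a plain LP.
    """
    n, d = X.shape
    # decision variables, stacked as z = [u (d), v (d), b (1), xi (n)]
    c = np.concatenate([np.ones(2 * d), [0.0], C * np.ones(n)])
    # margin constraints rewritten for linprog's  A_ub @ z <= b_ub  form:
    #   -y_i*(x_i.u) + y_i*(x_i.v) - y_i*b - xi_i <= -1
    A_ub = np.hstack([-y[:, None] * X, y[:, None] * X, -y[:, None], -np.eye(n)])
    b_ub = -np.ones(n)
    bounds = [(0, None)] * (2 * d) + [(None, None)] + [(0, None)] * n
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, bounds=bounds, method="highs")
    u, v, b = res.x[:d], res.x[d:2 * d], res.x[2 * d]
    return u - v, b


# tiny linearly separable toy problem (hypothetical data)
X_toy = np.array([[2.0, 2.0], [3.0, 3.0], [-2.0, -2.0], [-3.0, -3.0]])
y_toy = np.array([1.0, 1.0, -1.0, -1.0])
w, b = l1_svm_lp(X_toy, y_toy)
print(np.sign(X_toy @ w + b))  # predictions agree with y_toy
```

Because the problem is an LP, it can be handed to any off-the-shelf LP solver; this is the computational advantage the abstract contrasts with the quadratic programming required by L-2 SVM-based LUPI.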

INDEX TERMS

Support vector machines, kernel, training, machine learning, mathematical model, computational modeling, time series analysis, binary classification, 1-norm, privileged information

CITATION

Lingfeng Niu and Jianmin Wu,
"Nonlinear L-1 Support Vector Machines for Learning Using Privileged Information,"
*2012 IEEE 12th International Conference on Data Mining Workshops*, pp. 495-499, 2012, doi:10.1109/ICDMW.2012.79