Efficient Additive Kernels via Explicit Feature Maps
Issue No. 03 - March (2012 vol. 34)
pp. 480-492
Andrea Vedaldi , Oxford University, Oxford
Andrew Zisserman , Oxford University, Oxford
ABSTRACT
Large-scale nonlinear support vector machines (SVMs) can be approximated by linear ones using a suitable feature map. The linear SVMs are in general much faster to learn and evaluate (test) than the original nonlinear SVMs. This work introduces explicit feature maps for the additive class of kernels, such as the intersection, Hellinger's, and χ² kernels, commonly used in computer vision, and enables their use in large-scale problems. In particular, we: 1) provide explicit feature maps for all additive homogeneous kernels, along with closed-form expressions for all common kernels; 2) derive corresponding approximate finite-dimensional feature maps based on a spectral analysis; and 3) quantify the error of the approximation, showing that the error is independent of the data dimension and decays exponentially fast with the approximation order for selected kernels such as χ². We demonstrate that the approximations have indistinguishable performance from the full kernels yet greatly reduce the train/test times of SVMs. We also compare with two other approximation methods: the Nyström approximation of Perronnin et al. [1], which is data dependent, and the explicit map of Maji and Berg [2] for the intersection kernel, which, as in the case of our approximations, is data independent. The approximations are evaluated on a number of standard data sets, including Caltech-101 [3], DaimlerChrysler pedestrians [4], and INRIA pedestrians [5].
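To make the construction concrete, the following is a minimal sketch of an approximate finite-dimensional feature map for the χ² kernel k(x, y) = 2xy/(x + y), using the spectrum κ(λ) = sech(πλ) associated with that kernel in the homogeneous-kernel framework. The approximation order `n` and sampling period `L` below are illustrative choices for this sketch, not the paper's tuned settings.

```python
import numpy as np

def chi2_feature_map(x, n=3, L=0.5):
    """Approximate explicit feature map for the chi-squared kernel
    k(x, y) = 2xy / (x + y) on nonnegative scalars/arrays.

    Sketch of the spectral construction for additive homogeneous
    kernels: the kernel's spectrum kappa(lambda) = sech(pi * lambda)
    is sampled at multiples of the period L, giving 2n + 1 features.
    n and L here are illustrative, not the paper's tuned values.
    """
    x = np.asarray(x, dtype=float)
    kappa = lambda lam: 1.0 / np.cosh(np.pi * lam)  # chi2 spectrum
    logx = np.log(np.maximum(x, 1e-12))             # guard log(0)
    feats = [np.sqrt(L * kappa(0.0) * x)]           # DC component
    for j in range(1, n + 1):
        lam = j * L
        scale = np.sqrt(2.0 * L * kappa(lam) * x)
        feats.append(scale * np.cos(lam * logx))
        feats.append(scale * np.sin(lam * logx))
    return np.concatenate([f[..., None] for f in feats], axis=-1)

def chi2_kernel(x, y):
    """Exact chi-squared kernel for comparison."""
    return 2.0 * x * y / (x + y)

# The inner product of the maps approximates the exact kernel value:
x, y = 0.2, 0.7
approx = float(np.dot(chi2_feature_map(x), chi2_feature_map(y)))
exact = chi2_kernel(x, y)
```

With n = 3 and L = 0.5, the approximate inner product agrees with the exact χ² kernel to within roughly 1-2% on typical histogram-valued inputs, consistent with the exponentially decaying error the paper proves for this kernel.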
INDEX TERMS
Kernel methods, feature map, large scale learning, object recognition, object detection.
CITATION
Andrea Vedaldi, Andrew Zisserman, "Efficient Additive Kernels via Explicit Feature Maps", IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 34, no. 3, pp. 480-492, March 2012, doi:10.1109/TPAMI.2011.153
REFERENCES
