Kernel methods are becoming increasingly popular for various machine learning tasks, the most prominent being the support vector machine (SVM) for classification. The SVM is well understood when used with conditionally positive definite (cpd) kernel functions. In practice, however, non-cpd kernels arise and are applied in SVMs. Simply "plugging" these indefinite kernels into SVMs often yields good empirical classification results, but the results are hard to interpret because the geometrical and theoretical understanding is missing. In this paper, we provide a step toward the comprehension of SVM classifiers in these situations. We give a geometric interpretation of SVMs with indefinite kernel functions and show that such SVMs are optimal hyperplane classifiers not by margin maximization, but by minimization of distances between convex hulls in pseudo-Euclidean spaces. This yields a sound framework and motivation for indefinite SVMs. The interpretation is the basis for further theoretical analysis, e.g., investigating uniqueness, and for the derivation of practical guidelines such as characterizing the suitability of indefinite SVMs.
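The "plugging in" of an indefinite kernel that the abstract refers to can be illustrated with a minimal sketch (not from the paper): the sigmoid (tanh) kernel is a classic example of a kernel that is not cpd for many parameter choices, yet it can still be handed to a standard SVM solver via a precomputed Gram matrix. The data, the `tanh_kernel` helper, and the parameters `gamma` and `c` below are illustrative assumptions, not the paper's experiments.

```python
import numpy as np
from sklearn.svm import SVC

# Toy two-class data (illustrative only; not from the paper).
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-1.0, 0.5, (20, 2)),
               rng.normal(+1.0, 0.5, (20, 2))])
y = np.array([0] * 20 + [1] * 20)

# Sigmoid (tanh) kernel: for many (gamma, c) it is NOT conditionally
# positive definite, i.e., it is an indefinite kernel.
def tanh_kernel(A, B, gamma=0.5, c=-1.0):
    return np.tanh(gamma * A @ B.T + c)

K = tanh_kernel(X, X)

# Negative eigenvalues of the Gram matrix confirm indefiniteness:
# the data embeds in a pseudo-Euclidean, not a Euclidean, space.
eigvals = np.linalg.eigvalsh(K)
print("min eigenvalue:", eigvals.min())

# "Plugging" the indefinite kernel into an SVM via the precomputed
# interface; the solver still returns a hyperplane classifier, whose
# geometric meaning the paper explains as minimizing the distance
# between convex hulls in the pseudo-Euclidean embedding space.
clf = SVC(kernel="precomputed").fit(K, y)
print("training accuracy:", clf.score(K, y))
```

Checking the smallest eigenvalue of `K` is a quick way to detect that a given kernel matrix is indefinite before interpreting the resulting classifier.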
Support vector machine, indefinite kernel, pseudo-Euclidean space, separation of convex hulls, pattern recognition.

B. Haasdonk, "Feature Space Interpretation of SVMs with Indefinite Kernels," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, pp. 482-492, 2005.