Training a support vector machine on a huge data set with thousands of classes is a challenging problem. This paper proposes an efficient algorithm to solve it. The key idea is to introduce a parallel optimization step that quickly removes most of the nonsupport vectors: block-diagonal matrices are used to approximate the original kernel matrix, so that the original problem can be split into hundreds of subproblems that can be solved far more efficiently. In addition, effective strategies such as kernel caching and efficient computation of the kernel matrix are integrated to speed up the training process. Our analysis shows that the time complexity of the proposed algorithm grows linearly with both the number of classes and the size of the data set. The experiments investigate many appealing properties of the algorithm and show that it scales much better than LIBSVM, SVM-light, and SVMTorch. Moreover, good generalization performance has also been achieved on several large databases.
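The screening idea described above can be illustrated with a small sketch. This is not the paper's actual decomposition algorithm, only a hedged toy version: the kernel matrix is approximated by a block-diagonal one (i.e., the data are split into independent chunks), each small SVM dual is solved on its own block, and only examples with nonzero dual variables survive as candidate support vectors. The sub-solver here is a simplified projected-gradient ascent on a bias-free dual (the equality constraint is dropped), and all function names and parameters are illustrative assumptions.

```python
import numpy as np

def rbf_kernel(X, Y, gamma=0.5):
    # Pairwise RBF kernel values between the rows of X and Y.
    d2 = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * d2)

def solve_block_dual(K, y, C=1.0, steps=200, eta=0.1):
    # Projected-gradient ascent on the simplified (bias-free) SVM dual:
    #   max  sum(alpha) - 0.5 * alpha' Q alpha,   0 <= alpha_i <= C,
    # where Q_ij = y_i y_j K_ij.  A stand-in for a proper QP solver.
    Q = (y[:, None] * y[None, :]) * K
    alpha = np.zeros(len(y))
    for _ in range(steps):
        alpha = np.clip(alpha + eta * (1.0 - Q @ alpha), 0.0, C)
    return alpha

def screen_support_vectors(X, y, n_blocks=4, C=1.0, tol=1e-6):
    # Screening step in the spirit of the block-diagonal approximation:
    # partition the data, solve each small dual independently (these solves
    # could run in parallel), and keep only examples with nonzero alpha.
    idx_blocks = np.array_split(np.random.permutation(len(y)), n_blocks)
    keep = []
    for idx in idx_blocks:
        K = rbf_kernel(X[idx], X[idx])
        alpha = solve_block_dual(K, y[idx], C=C)
        keep.extend(idx[alpha > tol])
    return np.sort(np.array(keep))

# Toy usage: two well-separated Gaussian blobs.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2, 0.3, (40, 2)), rng.normal(2, 0.3, (40, 2))])
y = np.array([-1.0] * 40 + [1.0] * 40)
cand = screen_support_vectors(X, y)
print(len(cand), "of", len(y), "examples survive screening")
```

The surviving candidates would then be passed to a full SVM solver, which now faces a much smaller problem than the original one.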
Keywords: Support vector machines (SVMs), algorithm design and analysis, algorithm efficiency, machine learning, handwritten character recognition.

J. Dong, C. Y. Suen, and A. Krzyzak, "Fast SVM Training Algorithm with Decomposition on Very Large Data Sets," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 27, pp. 603-618, 2005.