Issue No. 04 - April 2005 (Vol. 27), pp. 603-618
ABSTRACT
Training a support vector machine (SVM) on a very large data set with thousands of classes is a challenging problem. This paper proposes an efficient algorithm to solve it. The key idea is to introduce a parallel optimization step that quickly removes most of the nonsupport vectors: block diagonal matrices are used to approximate the original kernel matrix, so that the original problem can be split into hundreds of subproblems that can be solved more efficiently. In addition, effective strategies such as kernel caching and efficient computation of the kernel matrix are integrated to speed up the training process. Our analysis shows that the time complexity of the proposed algorithm grows linearly with the number of classes and the size of the data set. The experiments investigate many appealing properties of the proposed algorithm, and the results show that it scales much better than Libsvm, SVM^light, and SVMTorch. Moreover, good generalization performance has also been achieved on several large databases.
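To make the block-diagonal idea concrete, the following is a minimal sketch of the filtering step the abstract describes: the data are split into blocks (so the kernel matrix is approximated by its diagonal blocks), each block's subproblem is solved independently, only the support vectors found in each block are kept, and the final SVM is trained on that reduced set. This is not the authors' implementation; scikit-learn's SVC is assumed as a stand-in QP solver, and the block count, kernel, and parameters (C, gamma) are illustrative assumptions.

```python
import numpy as np
from sklearn.svm import SVC

def block_filtered_svm(X, y, n_blocks=10, C=1.0, gamma=0.1, seed=0):
    # Shuffle, then split indices into n_blocks: this corresponds to
    # approximating the kernel matrix by its block diagonal.
    rng = np.random.default_rng(seed)
    order = rng.permutation(len(X))
    keep = []  # indices of candidate support vectors
    for block in np.array_split(order, n_blocks):
        sub = SVC(kernel="rbf", C=C, gamma=gamma)
        sub.fit(X[block], y[block])        # solve one subproblem
        keep.extend(block[sub.support_])   # retain only its support vectors
    keep = np.asarray(keep)
    # Final pass on the reduced set, after most nonsupport vectors are gone.
    final = SVC(kernel="rbf", C=C, gamma=gamma)
    final.fit(X[keep], y[keep])
    return final

# Toy usage: most nonsupport vectors are discarded before the final solve.
if __name__ == "__main__":
    from sklearn.datasets import make_classification
    X, y = make_classification(n_samples=5000, n_features=20, random_state=0)
    model = block_filtered_svm(X, y)
    print("final training accuracy:", model.score(X, y))
```

In the paper the subproblems run in parallel; the sequential loop above could be parallelized (e.g., with concurrent.futures) without changing the filtering logic.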
INDEX TERMS
Support vector machines (SVMs), algorithm design and analysis, algorithm efficiency, machine learning, handwritten character recognition.
CITATION
Jian-xiong Dong, Adam Krzyzak, Ching Y. Suen, "Fast SVM Training Algorithm with Decomposition on Very Large Data Sets", IEEE Transactions on Pattern Analysis & Machine Intelligence, vol. 27, no. 4, pp. 603-618, April 2005, doi:10.1109/TPAMI.2005.77