Abstract: The linear support vector machine (SVM) is a popular tool in machine learning. Compared with nonlinear SVM, linear SVM produces comparable performance and is more efficient at tackling large-scale, high-dimensional tasks. To speed up its training, various algorithms have been developed, such as Liblinear, SVM-perf and Pegasos. In this paper, we propose a new fast algorithm for linear SVMs based on a stochastic sequential minimal optimization (SSMO) method. There are two main differences between our algorithm and other linear SVM algorithms. First, our algorithm updates two variables simultaneously rather than a single variable. Second, we maintain the bias term b in the discriminant function. Experiments indicate that the proposed algorithm is much faster than some state-of-the-art solvers, such as Liblinear, and achieves higher classification accuracy.
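
To make the two-variable update concrete, the following is a minimal sketch of one SMO-style pairwise update of the dual variables for a linear SVM that also maintains the bias term b. It follows the standard SMO update rules (clipped closed-form step for one variable, the other adjusted to preserve the equality constraint, Platt-style bias update); the function name, pair-selection strategy, and variable names are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

def ssmo_pair_update(X, y, alpha, w, b, C, i, j, eps=1e-12):
    """One SMO-style update of the dual pair (alpha[i], alpha[j]) for a linear SVM.

    Hypothetical sketch: X is an (n, d) data matrix, y is an n-vector of +/-1
    labels, w and b define the current model f(x) = w.x + b, C is the
    box-constraint parameter. The indices i, j are assumed to be chosen at
    random (the "stochastic" part); this is an assumption, not the paper's rule.
    """
    if i == j:
        return alpha, w, b
    # Prediction errors under the current linear model.
    E_i = (X[i] @ w + b) - y[i]
    E_j = (X[j] @ w + b) - y[j]
    # Box bounds for alpha[j] implied by 0 <= alpha <= C and the equality constraint.
    if y[i] != y[j]:
        L, H = max(0.0, alpha[j] - alpha[i]), min(C, C + alpha[j] - alpha[i])
    else:
        L, H = max(0.0, alpha[i] + alpha[j] - C), min(C, alpha[i] + alpha[j])
    if L >= H:
        return alpha, w, b
    # Curvature of the dual objective along the update direction (linear kernel).
    K_ii, K_jj, K_ij = X[i] @ X[i], X[j] @ X[j], X[i] @ X[j]
    eta = K_ii + K_jj - 2.0 * K_ij
    if eta < eps:
        return alpha, w, b
    a_i_old, a_j_old = alpha[i], alpha[j]
    # Closed-form step for alpha[j], clipped to [L, H].
    alpha[j] = np.clip(a_j_old + y[j] * (E_i - E_j) / eta, L, H)
    # alpha[i] moves so that sum_k alpha_k y_k stays constant.
    alpha[i] = a_i_old + y[i] * y[j] * (a_j_old - alpha[j])
    # Incrementally maintain w = sum_k alpha_k y_k x_k.
    w = w + y[i] * (alpha[i] - a_i_old) * X[i] + y[j] * (alpha[j] - a_j_old) * X[j]
    # Update the bias term b (averaged when neither variable is strictly inside the box).
    b_i = b - E_i - y[i] * (alpha[i] - a_i_old) * K_ii - y[j] * (alpha[j] - a_j_old) * K_ij
    b_j = b - E_j - y[i] * (alpha[i] - a_i_old) * K_ij - y[j] * (alpha[j] - a_j_old) * K_jj
    if 0.0 < alpha[i] < C:
        b = b_i
    elif 0.0 < alpha[j] < C:
        b = b_j
    else:
        b = 0.5 * (b_i + b_j)
    return alpha, w, b
```

A full training loop would repeat this update over randomly sampled pairs until the Karush-Kuhn-Tucker conditions are approximately satisfied; maintaining w explicitly keeps each update O(d), which is what makes the approach practical for large, high-dimensional problems.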