Bridging performance gap between minimal and maximal SVM models

Published: 20 Apr 2023, Last Modified: 20 Apr 2023
Accepted by TMLR
Abstract: Multi-class support vector machine (SVM) models are typically built from binary SVMs trained on all possible pairs of classes in a one-against-one fashion. This requires too much computation for datasets with hundreds or thousands of classes, which motivates the search for multi-class models that do not use all pairwise SVMs. Our models correspond to a choice of model graph, whose vertices correspond to classes and whose edges indicate which pairwise SVMs are trained. We conduct experiments to uncover metrical and topological properties that impact the accuracy of a multi-class SVM model. Based on the results, we propose a way to construct intermediate multi-class SVM models. The key insight is that for model graphs of diameter two, we can estimate missing pairwise probabilities from the known ones, thus reducing the computation of posteriors to the usual complete (maximal) case. Our proposed algorithm allows one to reduce computational effort by 50-80% while keeping accuracy near, or even above, that of a softmax classifier. In our work we use convolutional datasets, which have multiple advantages for benchmarking multi-class SVM models.
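The sketch below illustrates the key insight in the simplest possible setting: when the model graph has diameter two, a missing pairwise probability for classes (i, j) can be estimated by chaining the two known pairwise probabilities through an intermediate class k. It assumes a Bradley-Terry-style pairwise model, r_ij = p_i / (p_i + p_j); this is an illustrative assumption, not necessarily the exact estimator used in the paper.

```python
import numpy as np

def estimate_missing_pairwise(r_ik: float, r_kj: float) -> float:
    """Estimate the pairwise probability r_ij for an untrained SVM (i, j)
    from two known pairwise probabilities sharing an intermediate class k,
    which exists whenever the model graph has diameter two.

    Assumes a Bradley-Terry-style model r_ij = p_i / (p_i + p_j)
    (an illustrative assumption, not necessarily the paper's estimator).
    """
    eps = 1e-12
    # Convert probabilities to odds p_i / p_k and p_k / p_j.
    odds_ik = r_ik / max(1.0 - r_ik, eps)
    odds_kj = r_kj / max(1.0 - r_kj, eps)
    # Chain the odds along the path i -> k -> j to obtain p_i / p_j.
    odds_ij = odds_ik * odds_kj
    return odds_ij / (1.0 + odds_ij)

# Example: known probabilities r_ik = 0.7 and r_kj = 0.6 yield r_ij ~ 0.78.
print(estimate_missing_pairwise(0.7, 0.6))
```

Once the missing pairwise probabilities are filled in this way, any standard pairwise-coupling method for the complete one-against-one setting can be used to compute class posteriors.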
Submission Length: Regular submission (no more than 12 pages of main content)
Changes Since Last Submission: This is the camera-ready version with added authors, affiliations, OpenReview link, and acknowledgments. This submission also includes an introductory video.
Video: https://youtu.be/jWAfN4deoC8
Code: https://github.com/ondrej-such/svm3
Assigned Action Editor: ~Yiming_Ying1
License: Creative Commons Attribution 4.0 International (CC BY 4.0)
Submission Number: 491