An Interpretable SVM Classifier by Discrete-Weight Optimization
Fethi Jarray, Sabri Boughorbel
Jun 14, 2017 (modified: Jun 19, 2017) · ICML 2017 WHI Submission · Readers: everyone
Abstract: The main problem investigated in this paper is learning interpretable linear classifiers from data. Interpretable models are captured using "discrete" linear functions. The learning problem is formulated as minimizing the cumulative zero-one loss of a discrete hyperplane, penalized by the standard L2 regularizer. This learning task is cast as a MILP problem and solved using convex relaxation and rounding. Experiments on both synthetic and real-world datasets corroborate the interest of this approach. We benchmarked the proposed method against two classifiers: (i) DILSVM, a discrete version of SVM based on a hinge loss, and (ii) the traditional linear L1-norm SVM. Our algorithm outperforms DILSVM on several datasets in terms of accuracy and computational efficiency, and its performance is close to that of the continuous SVM. These results suggest that the proposed classifier provides a good trade-off between performance and interpretability.
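The relax-and-round idea behind the abstract can be illustrated with a minimal sketch: first solve a continuous convex surrogate (here, hinge loss with L2 regularization via subgradient descent, standing in for the MILP's convex relaxation), then snap the learned weights onto a small integer grid {-K, ..., K} to obtain a discrete hyperplane. The function name, the surrogate loss, the scaling heuristic before rounding, and the grid range K are all illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def train_relax_and_round(X, y, K=3, lam=0.01, lr=0.1, epochs=200):
    """Hypothetical sketch: continuous relaxation (hinge + L2) then rounding.

    X: (n, d) feature matrix; y: labels in {-1, +1}.
    Returns integer weights in [-K, K] and an integer bias.
    """
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        margins = y * (X @ w + b)
        mask = margins < 1  # points violating the margin
        # subgradient of (lam/2)||w||^2 + (1/n) sum max(0, 1 - y(w.x + b))
        grad_w = lam * w - (y[mask, None] * X[mask]).sum(axis=0) / n
        grad_b = -y[mask].sum() / n
        w -= lr * grad_w
        b -= lr * grad_b
    # Rounding step: rescale so the largest weight maps to K (sign
    # predictions are invariant to positive scaling), then round.
    scale = K / np.abs(w).max() if np.abs(w).max() > 0 else 1.0
    w_disc = np.clip(np.rint(w * scale), -K, K)
    b_disc = float(np.rint(b * scale))
    return w_disc, b_disc

# Toy linearly separable data: two Gaussian blobs.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal(2, 1, (50, 2)), rng.normal(-2, 1, (50, 2))])
y = np.hstack([np.ones(50), -np.ones(50)])
w, b = train_relax_and_round(X, y)
acc = np.mean(np.sign(X @ w + b) == y)
```

The rounded classifier keeps only small integer weights, which is what makes the resulting hyperplane easy to read off, at the cost of a (usually small) drop in accuracy relative to the continuous solution.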