Keywords: Supervised classification, Minimax classification, High-dimensional learning, Efficient learning
TL;DR: In this paper, we leverage constraint generation methods to obtain an efficient learning algorithm for the recently proposed minimax risk classifiers (MRCs).
Abstract: High-dimensional data is common in multiple areas, such as health care and genomics, where the number of features can be hundreds of thousands. In such scenarios, the large number of features often leads to inefficient learning. Constraint generation methods have recently enabled efficient learning of L1-regularized support vector machines (SVMs). In this paper, we leverage such methods to obtain an efficient learning algorithm for the recently proposed minimax risk classifiers (MRCs). The proposed iterative algorithm also provides a sequence of worst-case error probabilities and performs feature selection. Experiments on multiple high-dimensional datasets show that the proposed algorithm is efficient in high-dimensional scenarios. In addition, the worst-case error probability provides useful information about the classifier performance, and the features selected by the algorithm are competitive with the state-of-the-art.
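The core idea behind constraint generation can be illustrated on a generic linear program with many constraints: solve a restricted problem over a small working set of constraints, add the most violated constraint, and re-solve until no violations remain. The sketch below is a minimal generic illustration of this loop using `scipy.optimize.linprog`; the function name `constraint_generation_lp`, the box bounds, and the tolerances are illustrative assumptions, not the paper's actual MRC formulation.

```python
import numpy as np
from scipy.optimize import linprog

def constraint_generation_lp(c, A, b, tol=1e-6, max_iter=1000, box=1e3):
    """Minimize c @ x subject to A @ x <= b via constraint generation.

    Generic sketch: solve a restricted LP over a working set of rows,
    then add the most violated constraint until none are violated.
    The box bounds keep early restricted problems bounded (an
    illustrative assumption, not part of the paper's formulation).
    """
    m, n = A.shape
    bounds = [(-box, box)] * n
    active = [0]  # start from a single arbitrary constraint
    for _ in range(max_iter):
        res = linprog(c, A_ub=A[active], b_ub=b[active], bounds=bounds)
        if res.status != 0:
            raise RuntimeError(res.message)
        x = res.x
        violations = A @ x - b           # positive entries are violated rows
        worst = int(np.argmax(violations))
        if violations[worst] <= tol:     # restricted solution is feasible,
            return x, res.fun            # hence optimal for the full LP
        active.append(worst)             # grow the working set and re-solve
    return x, res.fun

# Tiny usage example with random data (illustrative only).
rng = np.random.default_rng(0)
A = rng.normal(size=(10000, 5))          # many constraints, few variables
b = rng.uniform(1.0, 2.0, size=10000)
c = rng.normal(size=5)
x_opt, val = constraint_generation_lp(c, A, b)
print(f"objective {val:.4f} over {A.shape[0]} candidate constraints")
```

Only a small working set of constraints is ever active, which is what makes such methods efficient when the candidate set is huge. In the paper's MRC setting, an analogous iterative loop yields the sequence of worst-case error probabilities and the feature selection described in the abstract.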
Other Supplementary Material: zip
Community Implementations: [2 code implementations](https://www.catalyzex.com/paper/efficient-learning-of-minimax-risk/code)