An efficient, provably optimal, practical algorithm for the 0-1 loss linear classification problem

ICLR 2026 Conference Submission4618 Authors

13 Sept 2025 (modified: 08 Oct 2025) · ICLR 2026 Conference Submission · CC BY 4.0
Keywords: Classification, Global optimal algorithm, Hyperplane arrangement, Interpretable machine learning
TL;DR: Combinatorial and incidence relations between hyperplanes and data points, and a provably optimal algorithm for the 0-1 loss linear classification problem
Abstract: Algorithms for solving the linear classification problem have a long history, dating back at least to 1936 with linear discriminant analysis. For linearly separable data, many algorithms can obtain the exact solution to the corresponding 0-1 loss classification problem efficiently, but for data which is not linearly separable, it has been shown that this problem, in full generality, is NP-hard. Alternative approaches all involve approximations of some kind, such as the use of surrogates for the 0-1 loss (for example, the hinge or logistic loss), none of which can be guaranteed to solve the problem exactly. Finding an efficient, rigorously proven algorithm for obtaining an exact (i.e., globally optimal) solution to the 0-1 loss linear classification problem remains an open problem. By analyzing the combinatorial and incidence relations between hyperplanes and data points, we derive a rigorous construction algorithm, incremental cell enumeration (ICE), that can solve the 0-1 loss classification problem exactly in $O\left(N^{D}\right)$ time (exponential in the data dimension $D$). To the best of our knowledge, this is the first standalone algorithm, one that does not rely on general-purpose solvers, with rigorously proven guarantees for this problem. Moreover, we further generalize ICE to address the polynomial hypersurface classification problem in $O\left(N^{G}\right)$ time, where $G$ is a parameter determined by both the data dimension $D$ and the polynomial degree $K$ defining the hypersurface. The correctness of our algorithm is proved using tools from the theory of hyperplane arrangements and oriented matroids. We demonstrate the effectiveness of our algorithm on real-world datasets, achieving optimal training accuracy on small-scale datasets and higher test accuracy on most of the datasets within practical computational time.
We further analyze the computational complexity of the ICE algorithm on synthetic datasets and show that its runtime aligns with our theoretical time complexity predictions.
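To make the $O\left(N^{D}\right)$ candidate-enumeration idea concrete, the following is a minimal sketch for $D = 2$: an optimal separating hyperplane can be taken incident to $D$ data points (up to a small perturbation), so it suffices to score the 0-1 loss of hyperplanes through each pair of points. This naive enumeration illustrates the search space only; it is not the paper's ICE algorithm, and the function name and perturbation scheme are illustrative assumptions.

```python
import itertools
import numpy as np

def zero_one_loss_2d(X, y):
    """Brute-force 0-1-loss-optimal linear classifier in 2D.

    Enumerates candidate hyperplanes through pairs of data points,
    a naive O(N^D) strategy for D = 2 (illustration only; NOT the
    paper's incremental cell enumeration algorithm).

    X: (N, 2) float array of points; y: (N,) labels in {-1, +1}.
    Returns (minimum number of misclassifications, (w, b)).
    """
    N = len(X)
    best_err, best = N, None
    for i, j in itertools.combinations(range(N), 2):
        d = X[j] - X[i]
        w = np.array([-d[1], d[0]])   # normal to the line through X[i], X[j]
        if np.allclose(w, 0):
            continue                  # duplicate points define no line
        b = -w @ X[i]
        # Perturb the offset both ways so points lying exactly on the
        # line are assigned to either side; try both orientations of w.
        for eps in (1e-9, -1e-9):
            s = np.sign(X @ w + b + eps)
            err = min(np.sum(s != y), np.sum(s != -y))
            if err < best_err:
                best_err, best = err, (w, b)
    return best_err, best
```

For linearly separable data this returns zero error; for a non-separable configuration such as XOR-labeled corners of a square, it returns the true minimum of one misclassification. The exponential dependence on $D$ comes from replacing `combinations(range(N), 2)` with $D$-element subsets in higher dimensions.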
Supplementary Material: zip
Primary Area: learning theory
Submission Number: 4618