Non-parametric classification via expand-and-sparsify representation

Published: 25 Sept 2024, Last Modified: 06 Nov 2024, NeurIPS 2024 poster, CC BY 4.0
Keywords: non-parametric regression, non-parametric classification, expand-and-sparsify representation, universal consistency, minimax-optimal convergence rate
TL;DR: We propose algorithms for non-parametric classification via the expand-and-sparsify representation and prove that the convergence rate is minimax-optimal.
Abstract: In the *expand-and-sparsify* (EaS) representation, a data point in $\mathcal{S}^{d-1}$ is first randomly mapped to a higher dimension $\mathbb{R}^m$, where $m>d$, followed by a sparsification operation in which the $k \ll m$ most informative of the $m$ coordinates are set to one and the rest are set to zero. We propose two algorithms for non-parametric classification using such EaS representations. Our first algorithm uses the *winners-take-all* operation for the sparsification step; we show that the resulting classifier admits the form of a locally weighted average classifier and establish its consistency via Stone's Theorem. Further, assuming that the conditional probability function $P(y=1|x)=\eta(x)$ is H\"{o}lder continuous, we show that for an optimal choice of $m$ the convergence rate of this classifier is minimax-optimal. Our second algorithm uses the *empirical $k$-thresholding* operation for the sparsification step; under the assumption that the data lie on a low-dimensional manifold of dimension $d_0\ll d$, we show that the convergence rate of this classifier depends only on $d_0$ and is again minimax-optimal. Empirical evaluations on real-world datasets corroborate our theoretical results.
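As a rough illustration of the pipeline described in the abstract, the sketch below builds an EaS representation with winners-take-all sparsification and uses it in a locally weighted average classifier. The Gaussian random expansion, the function names, and the parameter values `m`, `k` are illustrative assumptions for this sketch, not the paper's exact construction.

```python
import numpy as np

def eas_representation(X, W, k):
    """Expand-and-sparsify: project each point into R^m via the random map W
    (m x d), then keep only the k largest coordinates per point
    (winners-take-all), setting them to one and the rest to zero."""
    Z = X @ W.T                                  # expand: (n, m) activations
    S = np.zeros_like(Z)
    top_k = np.argpartition(-Z, k - 1, axis=1)[:, :k]
    np.put_along_axis(S, top_k, 1.0, axis=1)     # sparsify: exactly k ones per row
    return S

def eas_classify(X_train, y_train, X_test, m=2000, k=20, seed=0):
    """Locally weighted average classifier over shared EaS codes: a test point
    is scored by the labels of training points weighted by how many active
    coordinates they share with it (plug-in rule on the estimated eta)."""
    d = X_train.shape[1]
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((m, d))              # assumed: Gaussian expansion map
    S_train = eas_representation(X_train, W, k)
    S_test = eas_representation(X_test, W, k)
    overlap = S_test @ S_train.T                 # weights = number of shared active coords
    denom = overlap.sum(axis=1)
    eta_hat = np.where(denom > 0, (overlap @ y_train) / np.maximum(denom, 1), 0.5)
    return (eta_hat >= 0.5).astype(int)
```

In this sketch, the overlap count between sparse codes plays the role of the local weighting kernel; points whose winners-take-all codes share more active coordinates contribute more to the estimate of $\eta(x)$.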
Supplementary Material: zip
Primary Area: Other (please use sparingly, only use the keyword field for more details)
Submission Number: 12527