Unsupervised Discriminative Feature Selection With $\ell_{2,0}$-Norm Constrained Sparse Projection

Published: 01 Jan 2025 · Last Modified: 18 Oct 2025 · IEEE Trans. Pattern Anal. Mach. Intell. 2025 · CC BY-SA 4.0
Abstract: Feature selection plays an important role in a wide spectrum of applications. Most sparsity-based feature selection methods solve a relaxed $\ell_{2,p}$-norm ($0 < p \leq 1$) regularized problem, which yields a sub-optimal feature subset and requires laborious tuning of regularization parameters. Optimizing the non-convex $\ell_{2,0}$-norm constrained problem directly remains an open question: existing algorithms for it require specific assumptions on the data distribution and cannot guarantee global convergence. In this article, we propose an unsupervised discriminative feature selection method with $\ell_{2,0}$-norm constrained sparse projection (SPDFS) to address these issues. To this end, fuzzy membership learning and $\ell_{2,0}$-norm constrained projection learning are performed simultaneously to learn a feature-wise sparse projection for discriminative feature selection. More importantly, two optimization strategies are developed for the proposed NP-hard problem: a non-iterative algorithm with a globally optimal solution is derived for a special case, and an iterative algorithm with both a rigorous ascent property and an approximation guarantee is designed for the general case. Experimental results on both toy and real-world datasets demonstrate the superiority of the proposed method over state-of-the-art methods on data clustering and text classification tasks.
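To make the central constraint concrete: the $\ell_{2,0}$-"norm" of a projection matrix $W$ counts its nonzero rows, so constraining it to at most $k$ forces feature-wise sparsity (each zero row discards one input feature). The sketch below is not the paper's SPDFS algorithm; it is a minimal NumPy illustration of the constraint itself and of the standard Euclidean projection onto the set $\{W : \|W\|_{2,0} \leq k\}$, which keeps the $k$ rows with the largest $\ell_2$ norms and zeroes the rest. The function names `l20_norm` and `project_l20` are ours, not from the paper.

```python
import numpy as np

def l20_norm(W):
    """Number of nonzero rows of W (the ell_{2,0} 'norm')."""
    return int(np.count_nonzero(np.linalg.norm(W, axis=1)))

def project_l20(W, k):
    """Euclidean projection of W onto {W : ||W||_{2,0} <= k}.

    Keeps the k rows with the largest ell_2 norm and zeroes the
    rest; the surviving row indices are the selected features.
    """
    row_norms = np.linalg.norm(W, axis=1)
    keep = np.argsort(row_norms)[-k:]        # indices of the k largest rows
    out = np.zeros_like(W)
    out[keep] = W[keep]
    return out

# Toy example: 6 features projected to 3 dimensions, select k = 2 features.
rng = np.random.default_rng(0)
W = rng.standard_normal((6, 3))
W_sparse = project_l20(W, k=2)
print(l20_norm(W_sparse))  # 2
selected = np.flatnonzero(np.linalg.norm(W_sparse, axis=1))
print(sorted(selected.tolist()))
```

This row-hard-thresholding step is what relaxed $\ell_{2,p}$ penalties approximate; enforcing it exactly is what makes the constrained problem NP-hard and motivates the paper's two tailored optimization strategies.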