SLM: End-to-end Feature Selection via Sparse Learnable Masks

TMLR Paper 859 Authors

13 Feb 2023 (modified: 05 Apr 2023) · Rejected by TMLR
Abstract: Feature selection has been widely used to alleviate compute requirements during training, improve model interpretability, and enhance model generalizability. We propose SLM -- Sparse Learnable Masks -- a canonical approach for end-to-end feature selection that scales well with respect to both the feature dimension and the number of samples. At the heart of SLM lies a simple yet effective learnable sparse mask that determines which features to select. The mask gives rise to a novel objective, derived from first principles via a quadratic relaxation of mutual information, that provably maximizes the mutual information (MI) between the selected features and the labels. In addition, we derive a scaling mechanism that allows SLM to precisely control the number of features selected, through a novel use of sparsemax. This enables more effective learning, as demonstrated in ablation studies. Empirically, SLM achieves state-of-the-art results against a variety of competitive baselines on eight benchmark datasets, often by a significant margin, especially on datasets with real-world challenges such as class imbalance.
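The abstract's key ingredient is a learnable mask passed through sparsemax so that many mask entries become exactly zero. Below is a minimal, hypothetical sketch (not the authors' implementation) of a sparsemax projection and how scaled mask scores yield a sparse feature-selection mask; the scaling factor `alpha` stands in for the paper's scaling mechanism and its exact value here is an assumption.

```python
import numpy as np

def sparsemax(z):
    """Project scores z onto the probability simplex (Martins & Astudillo, 2016).

    Unlike softmax, the result is sparse: entries below a data-dependent
    threshold tau become exactly zero.
    """
    z = np.asarray(z, dtype=float)
    z_sorted = np.sort(z)[::-1]                 # scores in decreasing order
    k = np.arange(1, z.size + 1)
    cumsum = np.cumsum(z_sorted)
    support = 1 + k * z_sorted > cumsum         # entries kept in the support
    k_max = k[support][-1]                      # size of the support
    tau = (cumsum[k_max - 1] - 1) / k_max       # threshold
    return np.maximum(z - tau, 0.0)

# Toy usage: mask scores would be learned end-to-end in SLM; here they are fixed.
logits = np.array([3.0, -1.0, 0.5, 2.2, -0.3])  # learnable mask scores (toy values)
alpha = 1.0                                     # assumed scaling; larger alpha -> sparser mask
mask = sparsemax(alpha * logits)                # sparse, nonnegative feature weights
selected = np.flatnonzero(mask)                 # indices of the features the mask keeps
print(mask, selected)
```

In this sketch, increasing `alpha` shrinks the sparsemax support, so a scaling rule of this kind can steer how many features survive; the paper's mechanism controls that count precisely, which the toy example does not attempt.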
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Gang_Niu1
Submission Number: 859