Greedy Information Maximization for Online Feature Selection

Published: 01 Feb 2023, Last Modified: 13 Feb 2023. Submitted to ICLR 2023.
Keywords: Online learning, feature selection, greedy optimization, mutual information
TL;DR: A greedy procedure for performing online feature selection by maximizing mutual information
Abstract: Feature selection is commonly used to reduce feature acquisition costs, but the standard approach is to train models with static feature subsets. Here, we consider the online feature selection problem, where the model can adaptively query features based on the presently available information. Online feature selection has mainly been viewed as a reinforcement learning problem, but we propose a simpler approach of greedily selecting features that maximize mutual information with the response variable. This intuitive idea is difficult to implement without perfect knowledge of the joint data distribution, so we propose a deep learning approach that recovers the greedy procedure when perfectly optimized. We apply our approach to numerous datasets and observe better performance than both RL-based and offline feature selection methods.
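The greedy idea in the abstract — repeatedly adding the feature whose joint mutual information with the response is largest — can be sketched for fully discrete data with a plug-in MI estimator. This is an illustrative baseline only, not the paper's deep-learning method (which avoids requiring the joint distribution); the function names `empirical_mi` and `greedy_select` are our own.

```python
import numpy as np

def empirical_mi(x, y):
    """Plug-in estimate of I(X;Y) in nats for two discrete 1-D arrays."""
    mi = 0.0
    for xv in np.unique(x):
        px = np.mean(x == xv)
        for yv in np.unique(y):
            py = np.mean(y == yv)
            pxy = np.mean((x == xv) & (y == yv))
            if pxy > 0:
                mi += pxy * np.log(pxy / (px * py))
    return mi

def greedy_select(X, y, k):
    """Greedily pick k columns of X, each time maximizing the joint MI
    of the selected set with y. X must contain small non-negative ints."""
    n, d = X.shape
    selected = []
    # Encode the joint value of the selected columns as one integer code,
    # so the joint MI reduces to MI between two discrete variables.
    code = np.zeros(n, dtype=np.int64)
    for _ in range(k):
        best_j, best_mi = None, -np.inf
        for j in range(d):
            if j in selected:
                continue
            cand = code * (int(X[:, j].max()) + 1) + X[:, j]
            mi = empirical_mi(cand, y)
            if mi > best_mi:
                best_j, best_mi = j, mi
        selected.append(best_j)
        code = code * (int(X[:, best_j].max()) + 1) + X[:, best_j]
    return selected
```

Note this static version scores candidates against the whole dataset; the online setting described in the abstract would instead condition each choice on the feature values already observed for the current instance.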
Anonymous Url: I certify that there is no URL (e.g., github page) that could be used to find authors’ identity.
No Acknowledgement Section: I certify that there is no acknowledgement section in this submission for double blind review.
Code Of Ethics: I acknowledge that I and all co-authors of this work have read and commit to adhering to the ICLR Code of Ethics.
Submission Guidelines: Yes
Please Choose The Closest Area That Your Submission Falls Into: Deep Learning and representational learning
Supplementary Material: zip