LLM-Select: Feature Selection with Large Language Models

TMLR Paper2935 Authors

28 Jun 2024 (modified: 17 Sept 2024) · Under review for TMLR · CC BY 4.0
Abstract: In this paper, we demonstrate a surprising capability of large language models (LLMs): given only input feature names and a description of a prediction task, they can select the most predictive features, with performance rivaling the standard tools of data science. Remarkably, these models exhibit this capacity across a variety of query mechanisms. For example, we zero-shot prompt an LLM to output a numerical importance score for a feature (e.g., "blood pressure") in predicting an outcome of interest (e.g., "heart failure"), with no additional context. In particular, we find that the latest models, such as GPT-4, can consistently identify the most predictive features regardless of the query mechanism and across various prompting strategies. We illustrate these findings through extensive experiments on real-world data, where we show that LLM-based feature selection consistently achieves strong performance competitive with data-driven methods such as the LASSO, despite never having looked at the downstream training data. Our findings suggest that LLMs may be useful not only for selecting the best features for training but also for deciding which features to collect in the first place. This could benefit practitioners in domains like healthcare, where collecting high-quality data comes at a high cost.
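
To make the querying mechanism described in the abstract concrete, here is a minimal Python sketch (not the authors' implementation): `query_llm` is a hypothetical stand-in for any chat-completion API call, and the prompt wording, 0-to-1 score scale, and top-k selection rule are illustrative assumptions.

```python
import re

def query_llm(prompt: str) -> str:
    """Hypothetical placeholder for a chat-completion call to an LLM (e.g., GPT-4)."""
    raise NotImplementedError("Plug in your preferred LLM API client here.")

def llm_feature_score(feature: str, outcome: str) -> float:
    """Zero-shot prompt the LLM for a numerical importance score for one feature."""
    prompt = (
        f"On a scale from 0 to 1, how important is the feature '{feature}' "
        f"for predicting '{outcome}'? Respond with a single number only."
    )
    reply = query_llm(prompt)
    match = re.search(r"\d*\.?\d+", reply)  # extract the first number in the reply
    return float(match.group()) if match else 0.0

def llm_select(features: list[str], outcome: str, k: int) -> list[str]:
    """Rank candidate features by LLM-assigned importance and keep the top k."""
    scores = {f: llm_feature_score(f, outcome) for f in features}
    return sorted(scores, key=scores.get, reverse=True)[:k]

# Example (hypothetical): pick the 2 features deemed most predictive of heart failure.
# selected = llm_select(["blood pressure", "age", "zip code", "cholesterol"],
#                       "heart failure", k=2)
```

Note that such a selector never touches the training data; the chosen feature subset would then be passed to any downstream model, which is how a comparison against data-driven baselines like the LASSO could be set up.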
Submission Length: Regular submission (no more than 12 pages of main content)
Assigned Action Editor: ~Matthew_Blaschko1
Submission Number: 2935