Abstract: The long-standing dominance of gradient-boosted decision trees on tabular data is currently challenged by tabular foundation models using In-Context Learning (ICL): setting the training data as context for the test data and predicting in a single forward pass without parameter updates. While the TabPFNv2 foundation model excels on tables with up to 10K samples, its alternating column- and row-wise attentions make handling large training sets computationally prohibitive. So, can ICL be effectively scaled and deliver a benefit for larger tables? We introduce TabICL, a tabular foundation model for classification, pretrained on synthetic datasets with up to 60K samples and capable of handling 500K samples on affordable resources. This is enabled by a novel two-stage architecture: a column-then-row attention mechanism to build fixed-dimensional embeddings of rows, followed by a transformer for efficient ICL. Across 200 classification datasets from the TALENT benchmark, TabICL is on par with TabPFNv2 while being systematically faster (up to 10 times), and significantly outperforms all other approaches. On 53 datasets with over 10K samples, TabICL surpasses both TabPFNv2 and CatBoost, demonstrating the potential of ICL for large data. Pretraining code, inference code, and pretrained models are available at https://github.com/soda-inria/tabicl.
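To make the two-stage architecture described in the abstract more concrete, here is a minimal PyTorch sketch of the general idea: column-then-row attention that compresses each row into a fixed-dimensional embedding, followed by a transformer that performs in-context learning over labeled training rows and unlabeled test rows in one forward pass. This is not the authors' implementation; the dimensions, layer counts, label injection, and pooling via a [CLS]-style token are assumptions chosen only for illustration.

```python
# Minimal sketch (not the released TabICL model) of a column-then-row
# attention embedder followed by an in-context-learning transformer.
# All hyperparameters and design details below are illustrative assumptions.
import torch
import torch.nn as nn


class RowEmbedder(nn.Module):
    """Stage 1: build a fixed-dimensional embedding for every row of the table."""

    def __init__(self, d_model: int = 64, n_heads: int = 4):
        super().__init__()
        self.cell_proj = nn.Linear(1, d_model)  # scalar cell value -> d_model vector
        # Attention along rows, applied independently to each column.
        self.col_attn = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        # Attention along columns, applied independently to each row.
        self.row_attn = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.cls = nn.Parameter(torch.zeros(1, 1, d_model))  # pooled row representation

    def forward(self, X: torch.Tensor) -> torch.Tensor:
        # X: (n_rows, n_cols) numeric table
        n_rows, _ = X.shape
        h = self.cell_proj(X.unsqueeze(-1))                    # (n_rows, n_cols, d)
        # Column-wise attention: each column attends over its rows.
        h = self.col_attn(h.transpose(0, 1)).transpose(0, 1)   # (n_rows, n_cols, d)
        # Row-wise attention: each row attends over its columns plus a [CLS] token.
        cls = self.cls.expand(n_rows, -1, -1)
        h = self.row_attn(torch.cat([cls, h], dim=1))
        return h[:, 0]                                         # (n_rows, d) row embeddings


class ICLHead(nn.Module):
    """Stage 2: in-context learning over row embeddings, no parameter updates at test time."""

    def __init__(self, d_model: int = 64, n_heads: int = 4, n_classes: int = 2):
        super().__init__()
        self.label_emb = nn.Embedding(n_classes, d_model)
        self.backbone = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True), num_layers=2
        )
        self.out = nn.Linear(d_model, n_classes)

    def forward(self, train_emb, train_y, test_emb):
        # Training rows carry their labels; test rows do not.
        ctx = train_emb + self.label_emb(train_y)
        seq = torch.cat([ctx, test_emb], dim=0).unsqueeze(0)   # (1, n_train + n_test, d)
        h = self.backbone(seq).squeeze(0)
        # A real implementation would mask attention so test rows only see training rows.
        return self.out(h[len(train_emb):])                    # logits for the test rows
```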
Lay Summary: Spreadsheets—tables of rows and columns—sit at the heart of healthcare records, bank ledgers and countless business logs. For years, the go-to tools for learning from such tables have been a family of models called gradient-boosted decision trees, which must be trained separately on every new table and may take hours to tune. Our research asks a simple question: can we build one reusable model that instantly adapts to any new table, even very large ones? We introduce TabICL, a model that first learns general patterns from millions of computer-generated tables and then, at test time, makes predictions by simply “reading” the new data—all in a single pass, with no extra training. By releasing the code and pretrained models, we make rapid, energy-efficient data analysis available to everyone.
Link To Code: https://github.com/soda-inria/tabicl
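As a rough illustration of the workflow the lay summary describes (one pretrained model, no per-dataset training loop), the snippet below assumes a scikit-learn-style wrapper around the released inference code. The import path, class name `TabICLClassifier`, and its methods are assumptions, not a documented API; consult the repository README for the actual interface.

```python
# Hypothetical usage sketch: the tabicl import and TabICLClassifier name are
# assumptions about a scikit-learn-style interface, not confirmed API.
from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

from tabicl import TabICLClassifier  # assumed import path

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

clf = TabICLClassifier()       # pretrained foundation model
clf.fit(X_train, y_train)      # "fit" stores the in-context training set; no gradient updates
pred = clf.predict(X_test)     # single forward pass over context + test rows
print(accuracy_score(y_test, pred))
```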
Primary Area: Deep Learning->Foundation Models
Keywords: Foundation Model, In-Context Learning, Tabular Data, Tabular Classification
Submission Number: 13277