TL;DR: TabFlex is a linear-attention-based model for scalable tabular classification that outperforms existing methods in speed while maintaining strong performance on both small and large datasets.
Abstract: Leveraging the in-context learning (ICL) capability of Large Language Models (LLMs) for tabular classification has gained significant attention for its training-free adaptability across diverse datasets. Recent advancements, like TabPFN, excel on small-scale tabular datasets but struggle to scale to large and complex datasets. Our work enhances the efficiency and scalability of TabPFN for larger datasets by incorporating linear attention mechanisms as a scalable alternative to quadratic-complexity self-attention. Our model, TabFlex, efficiently handles tabular datasets with thousands of features and hundreds of classes, scaling seamlessly to millions of samples. For instance, TabFlex processes the poker-hand dataset with over a million samples in just 5 seconds. Our extensive evaluations demonstrate that TabFlex can achieve over a 2× speedup compared to TabPFN and a 1.5× speedup over XGBoost, outperforming 25 tested baselines in terms of efficiency across a diverse range of datasets. Furthermore, TabFlex remains highly effective on large-scale datasets, delivering strong performance with significantly reduced computational costs, especially when combined with data-efficient techniques such as dimensionality reduction and data sampling.
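For readers unfamiliar with the mechanism the abstract refers to, below is a minimal, hypothetical PyTorch sketch (not taken from the TabFlex codebase) contrasting standard softmax self-attention, whose cost grows quadratically with the number of in-context rows, with a linear attention variant that avoids ever forming the N×N score matrix. The ELU+1 feature map and normalization follow a common linear-attention formulation and are assumptions for illustration, not the paper's exact design.

```python
import torch
import torch.nn.functional as F

def softmax_attention(q, k, v):
    # Standard self-attention: the (N x N) score matrix makes this O(N^2) in sequence length.
    scores = torch.softmax(q @ k.transpose(-1, -2) / q.shape[-1] ** 0.5, dim=-1)
    return scores @ v

def linear_attention(q, k, v, eps=1e-6):
    # Linear attention sketch: apply a positive feature map (ELU + 1) and reassociate
    # the product as Q (K^T V), so the cost is O(N * d^2) rather than O(N^2 * d).
    q, k = F.elu(q) + 1, F.elu(k) + 1
    kv = k.transpose(-1, -2) @ v                                  # (d x d) summary of keys/values
    normalizer = q @ k.sum(dim=-2, keepdim=True).transpose(-1, -2) + eps
    return (q @ kv) / normalizer

# Toy usage: 10,000 "in-context" rows embedded in 64 dimensions.
q = k = v = torch.randn(1, 10_000, 64)
out = linear_attention(q, k, v)   # same shape as v; no 10k x 10k matrix is ever materialized
```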
Lay Summary: Recently, a new way of using large language models (LLMs) has gained attention: giving them a few examples to help make predictions on table-based tasks, where the data looks like a spreadsheet (e.g., a CSV file) and the goal is to predict one column based on the others. This method is fast and does not require training the model. However, it only works well with a small number of examples; too many can slow things down and use a lot of memory, because of how LLMs are built. In this paper, we explore different model designs and introduce a new model that handles more data with much less memory and faster processing, without losing accuracy.
Application-Driven Machine Learning: This submission is on Application-Driven Machine Learning.
Link To Code: https://github.com/microsoft/ticl
Primary Area: Deep Learning->Other Representation Learning
Keywords: Tabular Classification, Transformer, State-Space Models, Linear Attention, Scalability
Submission Number: 6375