TabRet: Pre-training Transformer-based Tabular Models for Unseen Columns

Published: 04 Mar 2023, Last Modified: 14 Apr 2024
ME-FoMo 2023 Poster
Keywords: pre-training, tabular data, transfer learning, Transformer, masked autoencoder
TL;DR: TabRet is a pre-trainable Transformer that excels at adapting to new columns in tabular downstream tasks.
Abstract: We present TabRet, a pre-trainable Transformer-based model for tabular data. TabRet is designed to work on downstream tasks that contain columns not seen during pre-training. Unlike other methods, TabRet has an extra learning step before fine-tuning, called retokenizing, which calibrates feature embeddings based on the masked autoencoding loss. In experiments, we pre-trained TabRet on a large collection of public health surveys and fine-tuned it on classification tasks in healthcare; TabRet achieved the best AUC performance on four datasets. In addition, an ablation study shows that retokenizing and random-shuffle augmentation of columns during pre-training both contributed to the performance gains. The code is available at https://github.com/pfnet-research/tabret.
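To make the retokenizing idea concrete, here is a minimal PyTorch-style sketch, not the authors' implementation: the backbone interface, module names, and the restriction to numerical columns are all assumptions. The point it illustrates is the one stated in the abstract: per-column tokenizers for the unseen columns are trained against the masked-autoencoding objective while the pre-trained Transformer stays frozen.

```python
import torch
import torch.nn as nn

class RetokenizeWrapper(nn.Module):
    """Hypothetical sketch of TabRet-style retokenizing: only the new
    per-column tokenizers/detokenizers are trained; the pre-trained
    masked-autoencoder backbone is kept frozen throughout."""

    def __init__(self, backbone: nn.Module, num_new_columns: int, d_model: int):
        super().__init__()
        self.backbone = backbone  # pre-trained Transformer MAE (assumed interface)
        for p in self.backbone.parameters():
            p.requires_grad = False  # freeze all pre-trained weights
        # One linear tokenizer per unseen numerical column (assumption:
        # categorical columns would use nn.Embedding instead).
        self.tokenizers = nn.ModuleList(
            nn.Linear(1, d_model) for _ in range(num_new_columns)
        )
        # Matching detokenizers to reconstruct masked column values.
        self.detokenizers = nn.ModuleList(
            nn.Linear(d_model, 1) for _ in range(num_new_columns)
        )

    def forward(self, x: torch.Tensor, mask: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_new_columns); mask: same shape, True = masked entry.
        tokens = torch.stack(
            [tok(x[:, i : i + 1]) for i, tok in enumerate(self.tokenizers)], dim=1
        )  # (batch, cols, d_model)
        # Assumed backbone signature: returns per-column hidden states.
        hidden = self.backbone(tokens, mask)  # (batch, cols, d_model)
        recon = torch.cat(
            [det(hidden[:, i]) for i, det in enumerate(self.detokenizers)], dim=1
        )  # (batch, cols)
        # Masked-autoencoding loss, computed on the masked entries only.
        return ((recon - x) ** 2 * mask).sum() / mask.sum().clamp(min=1)
```

In this sketch, after the retokenizing loss converges, the calibrated tokenizers would be kept and the whole model fine-tuned on the downstream labels, matching the pre-train / retokenize / fine-tune pipeline described in the abstract.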