Scaling Experiments in Self-Supervised Cross-Table Representation Learning

Published: 28 Oct 2023, Last Modified: 07 Nov 2023, TRL @ NeurIPS 2023 Poster
Keywords: Table, Representation Learning, Cross-Table, Self-Supervised Learning
TL;DR: We propose a novel approach for self-supervised cross-table pretraining and investigate its scaling behavior using a large curated pretraining corpus and a small benchmark suite.
Abstract: To analyze the scaling potential of deep tabular representation learning models, we introduce a novel Transformer-based architecture tailored to tabular data and cross-table representation learning, utilizing table-specific tokenizers and a shared Transformer backbone. Our training approach encompasses both single-table and cross-table models, trained for missing value imputation via a self-supervised masked cell recovery objective. To understand the scaling behavior of our method, we train models of varying sizes, ranging from approximately $10^4$ to $10^7$ parameters. These models are trained on a carefully curated pretraining corpus of 135M training tokens sourced from 76 diverse datasets. We assess the scaling of our architecture in both single-table and cross-table pretraining setups by evaluating the pretrained models using linear probing on a curated set of benchmark datasets and comparing the results with conventional baselines.
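
To make the described design concrete, the sketch below illustrates the general pattern the abstract outlines: per-table tokenizers that embed cells, a shared Transformer backbone used across tables, and a self-supervised masked cell recovery loss. This is not the authors' implementation; all module names, dimensions, the categorical-only tokenizer, and the masking scheme are assumptions made for illustration.

```python
# Minimal PyTorch sketch (assumed, not the paper's code) of table-specific tokenizers,
# a shared Transformer backbone, and a masked cell recovery objective.
import torch
import torch.nn as nn


class TableTokenizer(nn.Module):
    """Table-specific tokenizer: one embedding table per column (categorical cells only;
    numeric cells would need an analogous embedding, omitted here for brevity)."""

    def __init__(self, cardinalities, d_model):
        super().__init__()
        # +1 slot per column reserved for the [MASK] token.
        self.col_embeddings = nn.ModuleList(
            [nn.Embedding(card + 1, d_model) for card in cardinalities]
        )
        self.mask_ids = torch.tensor(cardinalities)  # last index of each column = mask

    def forward(self, x, mask):
        # x: (batch, n_cols) integer-encoded cells; mask: (batch, n_cols) boolean
        x = torch.where(mask, self.mask_ids.to(x.device), x)
        tokens = [emb(x[:, j]) for j, emb in enumerate(self.col_embeddings)]
        return torch.stack(tokens, dim=1)  # (batch, n_cols, d_model)


class SharedBackbone(nn.Module):
    """Transformer encoder shared across all tables during cross-table pretraining."""

    def __init__(self, d_model=64, n_heads=4, n_layers=2):
        super().__init__()
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, n_layers)

    def forward(self, tokens):
        return self.encoder(tokens)  # (batch, n_cols, d_model)


def masked_cell_recovery_loss(backbone, tokenizer, heads, x, mask_prob=0.15):
    """Self-supervised objective: mask random cells, then recover their original values."""
    mask = torch.rand(x.shape) < mask_prob
    hidden = backbone(tokenizer(x, mask))
    loss = 0.0
    for j, head in enumerate(heads):  # one classification head per column
        col_mask = mask[:, j]
        if col_mask.any():
            logits = head(hidden[:, j])
            loss = loss + nn.functional.cross_entropy(logits[col_mask], x[col_mask, j])
    return loss


# Toy usage: a single table with three categorical columns.
cards = [10, 5, 7]
tokenizer = TableTokenizer(cards, d_model=64)
backbone = SharedBackbone(d_model=64)
heads = nn.ModuleList([nn.Linear(64, c) for c in cards])
x = torch.stack([torch.randint(0, c, (32,)) for c in cards], dim=1)
print(masked_cell_recovery_loss(backbone, tokenizer, heads, x))
```

In a cross-table setup, each dataset would get its own tokenizer and prediction heads while the backbone parameters are shared; for downstream evaluation via linear probing, the backbone's output representations would be frozen and fed to a linear classifier.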
Submission Number: 11