DistPFN: Test-Time Posterior Adjustment for Tabular Foundation Models under Label Shift

03 Sept 2025 (modified: 15 Nov 2025) · ICLR 2026 Conference Withdrawn Submission · CC BY 4.0
Keywords: Tabular In-Context Learning, Tabular Foundation Model, Tabular Classification, Tabular Deep Learning
TL;DR: We propose DistPFN, a test-time posterior adjustment method for in-context tabular foundation models, which rescales predicted class probabilities by downweighting the influence of the training prior and emphasizing the model’s predicted posterior.
Abstract: TabPFN has recently gained attention as a foundation model for tabular datasets, achieving strong performance by leveraging in-context learning on synthetic data. However, we find that TabPFN is vulnerable to label shift, often overfitting to the majority class in the training distribution. To address this limitation, we propose DistPFN, the first test-time posterior adjustment method designed for in-context tabular foundation models. DistPFN rescales predicted class probabilities by downweighting the influence of the training prior (i.e., the class distribution of the context) and emphasizing the contribution of the model’s predicted posterior, without modifying the architecture or requiring additional training. We further introduce DistPFN-T, which incorporates temperature scaling to adaptively control the adjustment strength based on the discrepancy between prior and posterior. We evaluate our methods on over 250 OpenML datasets, demonstrating substantial improvements for various TabPFN-based models in classification tasks under label shift, while maintaining strong performance in standard settings without label shift.
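The abstract does not give the exact adjustment rule, but the description (downweight the context's class prior, emphasize the predicted posterior, optionally temper the strength) matches standard prior-correction for label shift. The sketch below is an illustrative reading under that assumption, not the authors' implementation; the function name `adjust_posterior` and the strength parameter `alpha` are hypothetical.

```python
import numpy as np

def adjust_posterior(probs: np.ndarray, y_context: np.ndarray,
                     n_classes: int, alpha: float = 1.0) -> np.ndarray:
    """Rescale predicted class probabilities to downweight the context prior.

    probs     : (n_test, n_classes) posterior predicted by the model.
    y_context : (n_context,) integer labels of the in-context training set.
    alpha     : adjustment strength; alpha = 0 leaves probs unchanged.
    """
    # Empirical class prior of the context, smoothed to avoid division by zero.
    prior = np.bincount(y_context, minlength=n_classes).astype(float) + 1e-6
    prior /= prior.sum()

    # Divide out the prior (raised to alpha) and renormalize, so the model's
    # predicted posterior dominates over the context's class distribution.
    adjusted = probs / prior[None, :] ** alpha
    return adjusted / adjusted.sum(axis=1, keepdims=True)
```

A temperature-scaled variant in the spirit of DistPFN-T would set the strength adaptively from the discrepancy between the context prior and the average predicted posterior (e.g., via a divergence measure), rather than fixing `alpha`; the abstract does not specify that rule, so it is left out here.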
Supplementary Material: zip
Primary Area: foundation or frontier models, including LLMs
Submission Number: 1407