Keywords: GAMs, interpretability, tabular deep learning, glassbox, generalized additive models
TL;DR: GAMformer is the first tabular foundation model for GAMs: it estimates shape functions in a single forward pass and performs well on real datasets despite being trained only on synthetic causal data.
Abstract: While interpretability is crucial for machine learning applications in safety-critical domains and regulatory compliance, existing tabular foundation models like TabPFN lack the transparency these applications require. Generalized Additive Models (GAMs) provide the needed interpretability through their additive structure, but traditional GAM methods rely on iterative learning algorithms (based on splines, boosted trees, or neural networks) that are fundamentally incompatible with the in-context learning paradigm of foundation models. In this paper, we introduce GAMformer, the first tabular foundation model for GAMs, bridging the gap between the power of foundation models and the interpretability requirements of real-world applications. GAMformer estimates GAM shape functions in a single forward pass using in-context learning, a significant departure from conventional iterative approaches. Building on previous research applying in-context learning to tabular data, we train GAMformer exclusively on synthetically generated tables. Our experiments demonstrate that GAMformer performs comparably to other leading GAMs across various classification benchmarks while maintaining full interpretability.
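The additive structure the abstract refers to can be made concrete with a minimal sketch (purely illustrative, not the paper's implementation): a GAM's prediction is an intercept plus a sum of per-feature shape functions, which is why each feature's contribution can be inspected and plotted on its own. The shape functions and feature names below are hypothetical; GAMformer's contribution, per the abstract, is producing such shape functions in a single forward pass rather than fitting them iteratively.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gam_predict_proba(x, shape_functions, intercept=0.0):
    """x: 1-D array of feature values; shape_functions: one callable per feature
    (hypothetical stand-ins for the shape functions a fitted GAM would hold)."""
    # Additive structure: logit = intercept + sum of per-feature contributions.
    logit = intercept + sum(f_j(x_j) for f_j, x_j in zip(shape_functions, x))
    return sigmoid(logit)

# Toy shape functions for two made-up features (illustrative only).
shape_functions = [
    lambda age: 0.03 * (age - 50.0),          # contribution grows roughly linearly with age
    lambda bmi: 0.10 * max(bmi - 25.0, 0.0),  # contribution kicks in above a threshold
]

print(gam_predict_proba(np.array([63.0, 31.0]), shape_functions, intercept=-0.5))
```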
Primary Area: interpretability and explainable AI
Submission Number: 24768