What exactly has TabPFN learned to do?

Published: 16 Feb 2024, Last Modified: 28 Mar 2024
Venue: BT@ICLR2024 (ICLR 2024 Blogposts Track)
License: CC BY 4.0
Keywords: tabular representation learning, meta-learning, prior-data fitted networks
Blogpost Url: https://iclr-blogposts.github.io/2024/blog/what-exactly-has-tabpfn-learned-to-do/
Abstract: TabPFN [Hollmann et al., 2023], a Transformer model pretrained to perform in-context learning on fresh tabular classification problems, was presented at the last ICLR conference. To better understand its behavior, we treat it as a black-box generator of function approximators and observe the approximations it produces on a varied selection of training datasets. Exploring its learned inductive biases in this manner, we observe behavior that is by turns brilliant and baffling. We conclude this post with thoughts on how these results might inform the development, evaluation, and application of prior-data fitted networks (PFNs) in the future.
Ref Papers: https://openreview.net/forum?id=cp5PvcI6w8_
Id Of The Authors Of The Papers: ~Calvin_McCarter1
Conflict Of Interest: N/A
Submission Number: 42
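
The probing methodology sketched in the abstract (fit TabPFN on a small training set, then inspect the function approximation it produces over a grid of inputs) can be reproduced in a few lines. Below is a minimal sketch assuming the open-source tabpfn Python package, which exposes a scikit-learn-style fit/predict_proba interface; exact constructor arguments vary across package versions, and this is not the author's own experimental code.

# Minimal sketch of the black-box probing setup described in the abstract.
# Assumes the open-source `tabpfn` package (scikit-learn-style interface);
# constructor arguments may differ across versions. Not the author's code.
import numpy as np
from tabpfn import TabPFNClassifier

# A tiny 1-D training set: class 1 for x > 0, class 0 otherwise.
rng = np.random.default_rng(0)
X_train = rng.uniform(-1.0, 1.0, size=(32, 1))
y_train = (X_train[:, 0] > 0).astype(int)

# TabPFN does in-context learning: fit() merely stores the training set,
# and prediction is a single forward pass of the pretrained Transformer.
clf = TabPFNClassifier()
clf.fit(X_train, y_train)

# Evaluate the generated function approximation on a dense grid to expose
# its inductive biases (smoothness, calibration, extrapolation behavior).
X_grid = np.linspace(-1.5, 1.5, 200).reshape(-1, 1)
proba = clf.predict_proba(X_grid)
for x, p in zip(X_grid[::40, 0], proba[::40, 1]):
    print(f"x = {x:+.2f}   P(y=1 | x) = {p:.3f}")

Plotting P(y=1 | x) over such grids, for varied training datasets, is the sense in which the post treats TabPFN as a generator of function approximators.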