Generalization Can Emerge in Tabular Foundation Models From a Single Table
By: Junwei Ma, Nour Shaheen, Alex Labach, and more
Potential Business Impact:
Teaches computers to learn broadly from just one table of data.
Deep tabular modelling increasingly relies on in-context learning, where, during inference, a model receives a set of $(x,y)$ pairs as context and predicts labels for new inputs without weight updates. We challenge the prevailing view that broad generalization in this setting requires pre-training on large synthetic corpora (e.g., TabPFN priors) or a large collection of real data (e.g., TabDPT training datasets), and find that a relatively small amount of data suffices. Simple self-supervised pre-training on just a \emph{single} real table can produce surprisingly strong transfer across heterogeneous benchmarks. By systematically pre-training and evaluating on many diverse datasets, we analyze which aspects of the data matter most for building a Tabular Foundation Model (TFM) that generalizes across domains. We then connect this to the pre-training procedure shared by most TFMs and show that the number and quality of \emph{tasks} one can construct from a dataset is key to downstream performance.
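To make the described pre-training setup concrete, below is a minimal sketch of how many in-context tasks can be constructed from a single table: a random column is treated as the label, and rows are split into context $(x,y)$ pairs and held-out query inputs. This is not the authors' code; the table shape, split sizes, and the `sample_task` helper are illustrative assumptions.

```python
# Minimal sketch (assumed, not the authors' implementation): building many
# in-context tasks from one table, in the spirit of TFM pre-training.
import numpy as np

rng = np.random.default_rng(0)

# A single real table: rows x columns of numeric features (shape is arbitrary here).
table = rng.normal(size=(1000, 12))

def sample_task(table, context_size=128, query_size=32):
    """Sample one (context, query) task by treating a random column as the label."""
    n_rows, n_cols = table.shape
    target_col = rng.integers(n_cols)                        # column to predict
    feature_cols = [c for c in range(n_cols) if c != target_col]

    rows = rng.choice(n_rows, size=context_size + query_size, replace=False)
    X = table[rows][:, feature_cols]
    y = table[rows][:, target_col]

    # Context (x, y) pairs condition the model; query labels are held out.
    return (X[:context_size], y[:context_size]), (X[context_size:], y[context_size:])

# Each sampled task is one training example for an in-context learner: the model
# conditions on the context pairs and predicts the query labels without weight updates.
(context_X, context_y), (query_X, query_y) = sample_task(table)
print(context_X.shape, context_y.shape, query_X.shape, query_y.shape)
```

Under this view, a single table yields a combinatorially large pool of such tasks, which is consistent with the paper's finding that task count and quality, rather than sheer dataset volume, drive downstream generalization.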
Similar Papers
Robust Tabular Foundation Models
Machine Learning (CS)
Makes AI better at learning from data.
Of Graphs and Tables: Zero-Shot Node Classification with Tabular Foundation Models
Machine Learning (CS)
Makes computers understand networks like tables.
Comparing Task-Agnostic Embedding Models for Tabular Data
Machine Learning (CS)
Makes computers understand data much faster.