Generalization Can Emerge in Tabular Foundation Models From a Single Table

Published: November 12, 2025 | arXiv ID: 2511.09665v1

By: Junwei Ma, Nour Shaheen, Alex Labach, and more

Potential Business Impact:

Shows that a tabular model pre-trained on a single real table can generalize to new prediction tasks across domains.

Business Areas:
Predictive Analytics, Artificial Intelligence, Data and Analytics, Software

Deep tabular modelling increasingly relies on in-context learning where, during inference, a model receives a set of $(x,y)$ pairs as context and predicts labels for new inputs without weight updates. We challenge the prevailing view that broad generalization here requires pre-training on large synthetic corpora (e.g., TabPFN priors) or a large collection of real data (e.g., TabDPT training datasets), discovering that a relatively small amount of data suffices for generalization. We find that simple self-supervised pre-training on just a \emph{single} real table can produce surprisingly strong transfer across heterogeneous benchmarks. By systematically pre-training and evaluating on many diverse datasets, we analyze which aspects of the data are most important for building a Tabular Foundation Model (TFM) that generalizes across domains. We then connect this to the pre-training procedure shared by most TFMs and show that the number and quality of \emph{tasks} one can construct from a dataset is key to downstream performance.
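To make the abstract's setup concrete, the sketch below shows one way to construct many self-supervised in-context tasks from a single table: each task picks a target column and a random subset of rows, split into a labelled context set and a query set whose target values the model must predict without weight updates. This is a minimal illustration, not the authors' exact recipe; the function name `sample_tasks`, its parameters, and the uniform column/row sampling scheme are assumptions made for the example.

```python
import numpy as np

def sample_tasks(table, n_tasks=1000, context_size=128, query_size=32, seed=0):
    """Build self-supervised (context, query) tasks from one table.

    For each task, one column serves as the prediction target and a random
    subset of rows is split into a labelled context set and a query set.
    An in-context learner is then trained to predict the query targets given
    the context, with no gradient updates at inference time. The sampling
    scheme here is an illustrative assumption, not the paper's procedure.
    """
    rng = np.random.default_rng(seed)
    n_rows, n_cols = table.shape
    tasks = []
    for _ in range(n_tasks):
        target_col = rng.integers(n_cols)                      # column to predict
        feat_cols = [c for c in range(n_cols) if c != target_col]
        rows = rng.choice(n_rows, size=context_size + query_size, replace=False)
        ctx, qry = rows[:context_size], rows[context_size:]
        X_ctx, y_ctx = table[np.ix_(ctx, feat_cols)], table[ctx, target_col]
        X_qry, y_qry = table[np.ix_(qry, feat_cols)], table[qry, target_col]
        tasks.append((X_ctx, y_ctx, X_qry, y_qry))
    return tasks
```

Under this framing, the paper's observation is that a single sufficiently rich table already yields a large and varied pool of such tasks, which is what drives cross-domain transfer.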

Page Count
9 pages

Category
Computer Science:
Machine Learning (CS)