Score: 2

Robust Tabular Foundation Models

Published: December 2, 2025 | arXiv ID: 2512.03307v1

By: Matthew Peroni, Franck Le, Vadim Sheinin

BigTech Affiliations: Massachusetts Institute of Technology, IBM

Potential Business Impact:

Improves the accuracy of AI models that learn from structured (tabular) data, such as spreadsheets and database tables, by training them on deliberately challenging synthetic datasets.

Business Areas:
A/B Testing, Data and Analytics

The development of tabular foundation models (TFMs) has accelerated in recent years, showing strong potential to outperform traditional ML methods for structured data. A key finding is that TFMs can be pretrained entirely on synthetic datasets, opening opportunities to design data generators that encourage desirable model properties. Prior work has mainly focused on crafting high-quality priors over generators to improve overall pretraining performance. Our insight is that parameterizing the generator distribution enables an adversarial robustness perspective: during training, we can adapt the generator to emphasize datasets that are particularly challenging for the model. We formalize this by introducing an optimality gap measure, given by the difference between TFM performance and the best achievable performance as estimated by strong baselines such as XGBoost, CatBoost, and Random Forests. Building on this idea, we propose Robust Tabular Foundation Models (RTFM), a model-agnostic adversarial training framework. Applied to the TabPFN V2 classifier, RTFM improves benchmark performance, with up to a 6% increase in mean normalized AUC over the original TabPFN and other baseline algorithms, while requiring fewer than 100k additional synthetic datasets. These results highlight a promising new direction for targeted adversarial training and fine-tuning of TFMs using synthetic data alone.
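The abstract names two concrete mechanisms: an optimality-gap measure and gap-driven emphasis of a parameterized synthetic-data generator. The sketch below illustrates both under loud assumptions, since the paper's actual RTFM code is not shown here: `sample_synthetic_dataset` and the softmax temperature are hypothetical stand-ins, scikit-learn's `make_classification` replaces the paper's generator prior, a `LogisticRegression` stands in for the TFM under training (e.g. TabPFN V2), and scikit-learn ensembles approximate the XGBoost/CatBoost/Random Forest baselines. It is a minimal illustration of the idea, not the authors' implementation.

```python
# Minimal sketch of the optimality-gap measure and adversarial generator
# reweighting described in the abstract. All names below are illustrative
# placeholders, not the paper's API.

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split


def sample_synthetic_dataset(rng, n_features, n_informative):
    """Stand-in for the paper's parameterized synthetic data generator."""
    X, y = make_classification(
        n_samples=500,
        n_features=n_features,
        n_informative=n_informative,
        random_state=int(rng.integers(1_000_000)),
    )
    return train_test_split(X, y, test_size=0.3, random_state=0)


def optimality_gap(tfm_auc, X_tr, y_tr, X_te, y_te):
    """Best strong-baseline AUC minus TFM AUC on the same dataset.

    The abstract estimates "best achievable performance" with baselines
    such as XGBoost, CatBoost, and Random Forests; scikit-learn ensembles
    are used here to keep the sketch dependency-free.
    """
    baseline_aucs = []
    for model in (RandomForestClassifier(n_estimators=200),
                  GradientBoostingClassifier()):
        model.fit(X_tr, y_tr)
        baseline_aucs.append(roc_auc_score(y_te, model.predict_proba(X_te)[:, 1]))
    return max(baseline_aucs) - tfm_auc


rng = np.random.default_rng(0)
# A few hypothetical generator configurations to score for difficulty.
configs = [{"n_features": f, "n_informative": i} for f in (10, 20) for i in (2, 5)]

gaps = []
for cfg in configs:
    X_tr, X_te, y_tr, y_te = sample_synthetic_dataset(rng, **cfg)
    # LogisticRegression stands in for the TFM (e.g. TabPFN V2).
    tfm = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
    tfm_auc = roc_auc_score(y_te, tfm.predict_proba(X_te)[:, 1])
    gaps.append(optimality_gap(tfm_auc, X_tr, y_tr, X_te, y_te))

# Adversarial emphasis: sample future pretraining data preferentially from
# configurations with the largest optimality gap (softmax weights; the
# temperature 0.1 is an arbitrary choice for this sketch).
weights = np.exp(np.array(gaps) / 0.1)
weights /= weights.sum()
for cfg, gap, w in zip(configs, gaps, weights):
    print(cfg, f"gap={gap:.3f}", f"sampling weight={w:.2f}")
```

In the paper's framing, high-gap configurations are exactly the datasets where the TFM falls furthest short of what strong baselines can achieve, so upweighting them focuses the additional synthetic pretraining data where it is most needed.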

Country of Origin
🇺🇸 United States

Page Count
10 pages

Category
Computer Science:
Machine Learning (CS)