Exploring Fine-Tuning for Tabular Foundation Models
By: Aditya Tanna, Pratinav Seth, Mohamed Bouadi, and more
Tabular Foundation Models (TFMs) have recently shown strong in-context learning capabilities on structured data, achieving zero-shot performance comparable to traditional machine learning methods. This work presents the first comprehensive study of fine-tuning for TFMs across benchmarks including TALENT, OpenML-CC18, and TabZilla. We compare zero-shot, meta-learning, supervised fine-tuning (SFT), and parameter-efficient fine-tuning (PEFT) approaches, analyzing how dataset factors such as imbalance, size, and dimensionality affect outcomes. We find that zero-shot TFMs already achieve strong performance, while the benefits of fine-tuning are highly model- and data-dependent: meta-learning and PEFT provide moderate gains under specific conditions, whereas full SFT often reduces accuracy or calibration quality. Our findings cover performance, calibration, and fairness, offering practical guidelines on when fine-tuning is most beneficial and where its limitations lie.
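To make the evaluation protocol concrete, here is a minimal sketch (not the authors' code) of how such a comparison might be set up: each candidate model is fit on the training split and scored on accuracy and calibration via the standard binned expected calibration error (ECE). The model registry is hypothetical; a zero-shot TFM such as TabPFN exposes a scikit-learn-style `fit`/`predict_proba` interface and could be swapped in alongside SFT or PEFT fine-tuned variants.

```python
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression  # stand-in baseline model
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split


def expected_calibration_error(y_true, y_prob, n_bins=10):
    """Standard binned ECE: bin-weighted gap between confidence and accuracy."""
    confidences = y_prob.max(axis=1)
    predictions = y_prob.argmax(axis=1)
    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        mask = (confidences > lo) & (confidences <= hi)
        if mask.any():
            bin_acc = (predictions[mask] == y_true[mask]).mean()
            bin_conf = confidences[mask].mean()
            ece += mask.mean() * abs(bin_acc - bin_conf)
    return ece


def evaluate(name, model, X_tr, y_tr, X_te, y_te):
    # For a zero-shot TFM, "fit" only stores the in-context examples;
    # for SFT/PEFT variants it would also run gradient updates (hypothetical).
    model.fit(X_tr, y_tr)
    prob = model.predict_proba(X_te)
    pred = prob.argmax(axis=1)
    print(f"{name:>12s}  acc={accuracy_score(y_te, pred):.3f}  "
          f"ECE={expected_calibration_error(y_te, prob):.3f}")


if __name__ == "__main__":
    X, y = load_breast_cancer(return_X_y=True)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

    # Hypothetical registry: replace or extend with zero-shot and fine-tuned
    # TFM variants (e.g. TabPFNClassifier) to reproduce this style of comparison.
    models = {"baseline": LogisticRegression(max_iter=1000)}
    for name, model in models.items():
        evaluate(name, model, X_tr, y_tr, X_te, y_te)
```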
Similar Papers
Parameter-Efficient Fine-Tuning for Foundation Models
Computation and Language
Makes smart computer programs learn faster, cheaper.
Massive Supervised Fine-tuning Experiments Reveal How Data, Layer, and Training Factors Shape LLM Alignment Quality
Computation and Language
Makes AI better at following instructions.
Parameter-Efficient Continual Fine-Tuning: A Survey
Machine Learning (CS)
AI learns new things without forgetting old ones.