Towards Fair In-Context Learning with Tabular Foundation Models
By: Patrik Kenfack, Samira Ebrahimi Kahou, Ulrich Aïvodji
Potential Business Impact:
Makes AI fairer for everyone, not just some.
Transformer-based tabular foundation models have recently demonstrated promising in-context learning (ICL) performance on structured data, emerging as competitive alternatives to gradient-boosted trees. However, the fairness implications of this new paradigm remain largely unexplored. We present the first investigation of fairness in tabular ICL, evaluating three recently proposed foundation models -- TabPFNv2, TabICL, and TabDPT -- on multiple benchmark datasets. To mitigate biases, we explore three pre-processing fairness-enhancing methods: correlation removal (decorrelating input features from the sensitive attribute), group-balanced sample selection (ensuring equal representation of protected groups in context examples), and uncertainty-based sample selection (prioritizing context examples with high sensitive-attribute prediction uncertainty). Our experiments show that the uncertainty-based strategy consistently improves group fairness metrics (e.g., demographic parity, equalized odds, and equal opportunity) with minimal impact on predictive accuracy. We release our code to facilitate reproducibility (https://github.com/patrikken/Fair-TabICL).
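The three strategies above operate purely on the context set before ICL inference. The sketch below illustrates the uncertainty-based sample selection under stated assumptions: the helper name `select_uncertain_context`, the auxiliary logistic-regression model for the sensitive attribute, and the commented-out `TabPFNClassifier` usage are illustrative choices, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): uncertainty-based context selection.
# Assumption: an auxiliary classifier predicts the sensitive attribute from the
# features; rows whose sensitive attribute is hardest to predict (highest
# binary entropy) are kept as in-context examples for the tabular foundation model.
import numpy as np
from sklearn.linear_model import LogisticRegression


def select_uncertain_context(X, y, s, n_context):
    """Keep the n_context rows with the highest sensitive-attribute
    prediction uncertainty (entropy of the auxiliary model's probabilities)."""
    aux = LogisticRegression(max_iter=1000).fit(X, s)
    p = aux.predict_proba(X)[:, 1]
    eps = 1e-12
    entropy = -(p * np.log(p + eps) + (1 - p) * np.log(1 - p + eps))
    idx = np.argsort(entropy)[-n_context:]  # indices of the most uncertain rows
    return X[idx], y[idx]


# Hypothetical usage: pass the filtered context to a tabular ICL model
# (e.g., TabPFNv2 via the tabpfn package), then audit fairness on the test split.
# from tabpfn import TabPFNClassifier
# X_ctx, y_ctx = select_uncertain_context(X_train, y_train, s_train, n_context=512)
# clf = TabPFNClassifier().fit(X_ctx, y_ctx)
# y_pred = clf.predict(X_test)
```

The intuition is that examples whose sensitive attribute is hard to infer from the features carry less group-identifying signal, so conditioning the ICL model on them should reduce disparities such as demographic parity gaps while preserving most of the predictive signal.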
Similar Papers
On the Robustness of Tabular Foundation Models: Test-Time Attacks and In-Context Defenses
Machine Learning (CS)
Makes smart computer tables harder to trick.
FairPFN: A Tabular Foundation Model for Causal Fairness
Machine Learning (CS)
Fixes unfair computer decisions without knowing why.
In-Context Bias Propagation in LLM-Based Tabular Data Generation
Machine Learning (CS)
AI can accidentally create unfair data.