Towards Fair In-Context Learning with Tabular Foundation Models

Published: May 14, 2025 | arXiv ID: 2505.09503v3

By: Patrik Kenfack, Samira Ebrahimi Kahou, Ulrich Aïvodji

Potential Business Impact:

Makes AI predictions on tabular data fairer across demographic groups, with minimal loss in accuracy.

Business Areas:
Predictive Analytics, Artificial Intelligence, Data and Analytics, Software

Transformer-based tabular foundation models have recently demonstrated promising in-context learning (ICL) performance on structured data, emerging as competitive alternatives to gradient-boosted trees. However, the fairness implications of this new paradigm remain largely unexplored. We present the first investigation of fairness in tabular ICL, evaluating three recently proposed foundation models -- TabPFNv2, TabICL, and TabDPT -- on multiple benchmark datasets. To mitigate biases, we explore three pre-processing fairness-enhancing methods: correlation removal (decorrelating input features from the sensitive attribute), group-balanced sample selection (ensuring equal representation of protected groups in context examples), and uncertainty-based sample selection (prioritizing context examples with high sensitive-attribute prediction uncertainty). Our experiments show that the uncertainty-based strategy consistently improves group fairness metrics (e.g., demographic parity, equalized odds, and equal opportunity) with minimal impact on predictive accuracy. We release our code to facilitate reproducibility (https://github.com/patrikken/Fair-TabICL).
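Of the three pre-processing strategies, uncertainty-based sample selection is the one the abstract reports as consistently improving group fairness. A minimal sketch of that idea follows, assuming a binary sensitive attribute: fit an auxiliary model to predict the sensitive attribute from the features, then keep the candidate context examples it is least certain about. The helper name select_uncertain_context and the use of logistic regression as the auxiliary model are illustrative assumptions, not taken from the authors' repository.

import numpy as np
from sklearn.linear_model import LogisticRegression

def select_uncertain_context(X, y, s, k=128):
    """Return the k context examples whose sensitive attribute s is
    hardest to predict from the features X (binary s assumed)."""
    aux = LogisticRegression(max_iter=1000).fit(X, s)
    p = aux.predict_proba(X)[:, 1]
    # Binary entropy of the auxiliary prediction: largest when p is
    # near 0.5, i.e. when s is least inferable from the features.
    eps = 1e-12
    entropy = -(p * np.log(p + eps) + (1 - p) * np.log(1 - p + eps))
    idx = np.argsort(-entropy)[:k]  # most uncertain examples first
    return X[idx], y[idx]

# Usage on synthetic data where the sensitive attribute leaks into feature 0.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 8))
s = (X[:, 0] + 0.5 * rng.normal(size=1000) > 0).astype(int)
y = (X[:, 1] > 0).astype(int)
X_ctx, y_ctx = select_uncertain_context(X, y, s, k=128)
print(X_ctx.shape, y_ctx.shape)  # (128, 8) (128,)

The selected (X_ctx, y_ctx) pairs would then be passed as the in-context examples to a tabular foundation model such as TabPFNv2; the intuition is that examples whose group membership is hard to infer carry less sensitive-attribute signal into the model's predictions.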

Country of Origin
🇨🇦 Canada

Repos / Data Links
https://github.com/patrikken/Fair-TabICL

Page Count
30 pages

Category
Computer Science:
Machine Learning (CS)