FairPFN: A Tabular Foundation Model for Causal Fairness
By: Jake Robertson, Noah Hollmann, Samuel Müller, and more
Potential Business Impact:
Fixes unfair computer decisions without needing to know what caused the unfairness.
Machine learning (ML) systems are used in critical sectors such as healthcare, law enforcement, and finance. However, these systems are often trained on historical data containing demographic biases, leading to ML decisions that perpetuate or exacerbate existing social inequalities. Causal fairness provides a transparent, human-in-the-loop framework for mitigating algorithmic discrimination that aligns closely with legal doctrines of direct and indirect discrimination. Current causal fairness frameworks, however, share a key limitation: they assume prior knowledge of the correct causal model, which restricts their applicability in complex fairness scenarios where causal models are unknown or difficult to identify. To bridge this gap, we propose FairPFN, a tabular foundation model pre-trained on synthetic causal fairness data to identify and mitigate the causal effects of protected attributes in its predictions. FairPFN's key contribution is that it requires no knowledge of the causal model yet still performs strongly, relative to robust baseline methods, at identifying and removing protected causal effects across a diverse set of hand-crafted and real-world scenarios. FairPFN paves the way for promising future research, making causal fairness accessible to a wider variety of complex fairness problems.
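The abstract describes a PFN-style, in-context-learning workflow: the model is pre-trained once on synthetic causal data and then, at inference time, simply conditions on a new dataset rather than being retrained. The sketch below is purely illustrative, assuming a scikit-learn-style interface and a hypothetical FairPFNClassifier class with a protected_columns argument; none of these names are taken from the paper. The runnable part generates a tiny synthetic dataset in which a protected attribute A influences the outcome through a downstream feature, the kind of protected causal effect FairPFN aims to remove.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 500

    A = rng.integers(0, 2, size=n)             # protected attribute (binary group)
    X1 = 2.0 * A + rng.normal(size=n)          # feature causally influenced by A (unfair pathway)
    X2 = rng.normal(size=n)                    # feature independent of A (fair pathway)
    Y = (X1 + X2 + rng.normal(scale=0.5, size=n) > 1.0).astype(int)

    X = np.column_stack([A, X1, X2])
    X_train, y_train = X[:400], Y[:400]
    X_test = X[400:]

    # Hypothetical FairPFN-style usage (these names are assumptions, not the
    # paper's actual API; they mirror the fit/predict interface of TabPFN):
    #
    # from fairpfn import FairPFNClassifier
    # clf = FairPFNClassifier(protected_columns=[0])   # column 0 holds A
    # clf.fit(X_train, y_train)          # "fit" = condition on the data in context;
    #                                    # no gradient updates at inference time
    # proba = clf.predict_proba(X_test)  # predictions with the causal effect of A mitigated

Because a PFN makes predictions via in-context learning, removing the causal influence of the protected attribute has to be learned during pre-training on synthetic causal fairness data rather than enforced as a constraint at test time, which is why no causal model needs to be supplied by the user.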
Similar Papers
Foundation Models for Causal Inference via Prior-Data Fitted Networks
Machine Learning (CS)
Helps computers understand cause and effect.
Towards Fair In-Context Learning with Tabular Foundation Models
Machine Learning (CS)
Makes AI fairer for everyone, not just some.
Do-PFN: In-Context Learning for Causal Effect Estimation
Machine Learning (CS)
Finds cause and effect without knowing all the rules.