Causal Manifold Fairness: Enforcing Geometric Invariance in Representation Learning

Published: January 6, 2026 | arXiv ID: 2601.03032v1

By: Vidhi Rathore

Potential Business Impact:

Improves AI fairness by correcting how a model's internal representation distorts differences between demographic groups.

Business Areas:
Image Recognition, Data and Analytics, Software

Fairness in machine learning is increasingly critical, yet standard approaches often treat data as static points in a high-dimensional space, ignoring the underlying generative structure. We posit that sensitive attributes (e.g., race, gender) do not merely shift data distributions but causally warp the geometry of the data manifold itself. To address this, we introduce Causal Manifold Fairness (CMF), a novel framework that bridges causal inference and geometric deep learning. CMF learns a latent representation where the local Riemannian geometry, defined by the metric tensor and curvature, remains invariant under counterfactual interventions on sensitive attributes. By enforcing constraints on the Jacobian and Hessian of the decoder, CMF ensures that the rules of the latent space (distances and shapes) are preserved across demographic groups. We validate CMF on synthetic Structural Causal Models (SCMs), demonstrating that it effectively disentangles sensitive geometric warping while preserving task utility, offering a rigorous quantification of the fairness-utility trade-off via geometric metrics.
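The core constraint described above can be illustrated with a toy sketch: for a decoder mapping latent codes to data, the pull-back metric G(z, a) = J^T J (with J the decoder Jacobian) should not change when the sensitive attribute a is counterfactually flipped. Everything below (the linear-plus-tanh decoder, the finite-difference Jacobian, the penalty form) is an illustrative assumption, not the paper's implementation:

```python
import numpy as np

def decoder(z, a, W0, Wa):
    # Toy decoder: the sensitive attribute a warps the latent-to-data map.
    return np.tanh((W0 + a * Wa) @ z)

def jacobian(f, z, eps=1e-5):
    # Central finite-difference Jacobian of f at z (illustrative stand-in
    # for autodiff in a real training loop).
    out_dim = len(f(z))
    J = np.zeros((out_dim, len(z)))
    for i in range(len(z)):
        dz = np.zeros(len(z))
        dz[i] = eps
        J[:, i] = (f(z + dz) - f(z - dz)) / (2 * eps)
    return J

def metric_invariance_penalty(z, W0, Wa):
    # Pull-back metric G(z, a) = J^T J; penalize its change under the
    # counterfactual flip a=0 -> a=1 via the Frobenius norm.
    J0 = jacobian(lambda v: decoder(v, 0.0, W0, Wa), z)
    J1 = jacobian(lambda v: decoder(v, 1.0, W0, Wa), z)
    G0, G1 = J0.T @ J0, J1.T @ J1
    return np.linalg.norm(G0 - G1, ord="fro")

rng = np.random.default_rng(0)
z = rng.normal(size=4)
W0 = rng.normal(size=(8, 4))
# A decoder whose geometry depends on a is penalized; one that ignores a is not.
penalty_warped = metric_invariance_penalty(z, W0, rng.normal(size=(8, 4)))
penalty_flat = metric_invariance_penalty(z, W0, np.zeros((8, 4)))
print(penalty_warped > penalty_flat)  # prints True
```

In the paper's framing this penalty would be one term in the training objective, traded off against task utility; the Hessian-based curvature constraint would be enforced analogously with second derivatives of the decoder.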

Repos / Data Links

Page Count
5 pages

Category
Computer Science:
Machine Learning (CS)