Score: 2

Biased Heritage: How Datasets Shape Models in Facial Expression Recognition

Published: March 5, 2025 | arXiv ID: 2503.03446v1

By: Iris Dominguez-Catena, Daniel Paternain, Mikel Galar, and more

Potential Business Impact:

Helps facial expression recognition systems avoid demographic bias and unfair treatment of protected groups.

Business Areas:
Facial Recognition Data and Analytics, Software

In recent years, the rapid development of artificial intelligence (AI) systems has raised concerns about our ability to ensure their fairness, that is, how to avoid discrimination based on protected characteristics such as gender, race, or age. While algorithmic fairness is well studied in simple binary classification tasks on tabular data, its application to complex, real-world scenarios, such as Facial Expression Recognition (FER), remains underexplored. FER presents unique challenges: it is inherently multiclass, and biases emerge across intersecting demographic variables, each potentially comprising multiple protected groups. We present a comprehensive framework to analyze bias propagation from datasets to trained models in image-based FER systems, while introducing new bias metrics specifically designed for multiclass problems with multiple demographic groups. Our methodology studies bias propagation by (1) inducing controlled biases in FER datasets, (2) training models on these biased datasets, and (3) analyzing the correlation between dataset bias metrics and model fairness notions. Our findings reveal that stereotypical biases propagate more strongly to model predictions than representational biases, suggesting that preventing emotion-specific demographic patterns should be prioritized over general demographic balance in FER datasets. Additionally, we observe that biased datasets lead to reduced model accuracy, challenging the assumed fairness-accuracy trade-off.
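To make the distinction between the two dataset-level bias notions concrete, the sketch below illustrates one common way each can be quantified: representational bias as imbalance in the overall demographic group distribution (here via normalized Shannon entropy), and stereotypical bias as the statistical association between demographic group and emotion label (here via Cramér's V). These are illustrative stand-ins, not the paper's exact metric definitions; the function names and the toy data are assumptions for demonstration only.

```python
import numpy as np
from collections import Counter

def representational_bias(groups):
    """Illustrative representational-bias score: 1 minus the normalized
    Shannon entropy of the demographic group distribution.
    0 = perfectly balanced groups, 1 = a single group dominates."""
    counts = np.array(list(Counter(groups).values()), dtype=float)
    p = counts / counts.sum()
    entropy = -(p * np.log(p)).sum()
    max_entropy = np.log(len(counts))
    return 1.0 - entropy / max_entropy if max_entropy > 0 else 1.0

def stereotypical_bias(groups, labels):
    """Illustrative stereotypical-bias score: Cramér's V between
    demographic group and emotion label.
    0 = no association, 1 = group fully determines the label."""
    groups, labels = np.asarray(groups), np.asarray(labels)
    g_vals, g_idx = np.unique(groups, return_inverse=True)
    l_vals, l_idx = np.unique(labels, return_inverse=True)
    # Build the group-by-label contingency table.
    table = np.zeros((len(g_vals), len(l_vals)))
    np.add.at(table, (g_idx, l_idx), 1)
    n = table.sum()
    expected = np.outer(table.sum(axis=1), table.sum(axis=0)) / n
    chi2 = ((table - expected) ** 2 / expected).sum()
    k = min(len(g_vals), len(l_vals)) - 1
    return float(np.sqrt(chi2 / (n * k))) if k > 0 else 0.0

# Toy example: demographic groups and emotion labels for six images.
groups = ["female", "female", "female", "male", "male", "female"]
labels = ["happy", "happy", "sad", "angry", "angry", "happy"]
print(representational_bias(groups))       # overall group imbalance
print(stereotypical_bias(groups, labels))  # group-emotion association
```

Under this framing, a dataset can be demographically balanced (low representational bias) yet still strongly stereotyped (high stereotypical bias) if certain emotions are labeled mostly on certain groups, which is the pattern the paper finds propagates most strongly into model predictions.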

Country of Origin
🇪🇸 Spain, 🇧🇪 Belgium

Repos / Data Links

Page Count
17 pages

Category
Computer Science:
Computer Vision and Pattern Recognition