Score: 1

Assessing Intersectional Bias in Representations of Pre-Trained Image Recognition Models

Published: June 4, 2025 | arXiv ID: 2506.03664v2

By: Valerie Krug, Sebastian Stober

Potential Business Impact:

Reveals demographic biases (age, race, gender) in the representations of pre-trained image recognition models when applied to facial images.

Business Areas:
Image Recognition Data and Analytics, Software

Deep Learning models have achieved remarkable success. Training them is often accelerated by building on top of pre-trained models, which poses the risk of perpetuating encoded biases. Here, we investigate biases in the representations of commonly used ImageNet classifiers for facial images, considering intersections of the sensitive variables age, race, and gender. To assess the biases, we use linear classifier probes and visualize activations as topographic maps. We find that representations in ImageNet classifiers particularly allow differentiation between ages. To a lesser extent, the models appear to associate certain ethnicities and to distinguish genders in middle-aged groups.
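The linear classifier probe approach mentioned in the abstract can be illustrated with a minimal sketch. It assumes a torchvision ResNet-50 as the pre-trained ImageNet classifier and a hypothetical set of facial images with sensitive-attribute labels; the paper's exact models, layers, and dataset are not specified here. Representations are extracted from the network, a simple linear model is trained to predict a sensitive attribute, and high probe accuracy suggests the attribute is strongly encoded in the representations.

```python
# Minimal sketch of a linear classifier probe on pre-trained representations.
# Assumptions: torchvision ResNet-50 backbone, penultimate-layer features,
# and hypothetical `face_images` / attribute labels supplied by the user.
import torch
from torchvision import models, transforms
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Pre-trained ImageNet classifier; replace the final classification layer
# with an identity so the model outputs penultimate-layer representations.
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2)
backbone.fc = torch.nn.Identity()
backbone.eval()

preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def extract_features(images):
    """Compute pooled activations for a list of PIL facial images."""
    batch = torch.stack([preprocess(img) for img in images])
    return backbone(batch).cpu().numpy()

def probe_accuracy(features, labels):
    """Train a linear probe and report how linearly decodable the
    sensitive attribute (or intersection of attributes) is."""
    X_train, X_test, y_train, y_test = train_test_split(
        features, labels, test_size=0.3, stratify=labels, random_state=0)
    probe = LogisticRegression(max_iter=2000)
    probe.fit(X_train, y_train)
    return probe.score(X_test, y_test)

# Hypothetical usage: `face_images` and per-attribute labels (e.g. age
# group, race, gender, or their intersections) must be provided.
# feats = extract_features(face_images)
# print("age-group probe accuracy:", probe_accuracy(feats, age_labels))
```

A probe accuracy well above chance for an attribute (or an intersectional group label) indicates that the pre-trained representations separate that attribute, which is the kind of signal the paper reports most strongly for age.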

Repos / Data Links

Page Count
20 pages

Category
Computer Science:
CV and Pattern Recognition