TrustSkin: A Fairness Pipeline for Trustworthy Facial Affect Analysis Across Skin Tone
By: Ana M. Cabanas, Alma Pedro, Domingo Mery
Potential Business Impact:
Finds unfairness in face-reading AI for dark skin.
Understanding how facial affect analysis (FAA) systems perform across different demographic groups requires reliable measurement of sensitive attributes such as ancestry, often approximated by skin tone, which itself is highly influenced by lighting conditions. This study compares two objective skin tone classification methods: the widely used Individual Typology Angle (ITA) and a perceptually grounded alternative based on Lightness ($L^*$) and Hue ($H^*$). Using AffectNet and a MobileNet-based model, we assess fairness across skin tone groups defined by each method. Results reveal a severe underrepresentation of dark skin tones ($\sim 2 \%$), alongside fairness disparities in F1-score (up to 0.08) and TPR (up to 0.11) across groups. While ITA shows limitations due to its sensitivity to lighting, the $H^*$-$L^*$ method yields more consistent subgrouping and enables clearer diagnostics through metrics such as Equal Opportunity. Grad-CAM analysis further highlights differences in model attention patterns by skin tone, suggesting variation in feature encoding. To support future mitigation efforts, we also propose a modular fairness-aware pipeline that integrates perceptual skin tone estimation, model interpretability, and fairness evaluation. These findings emphasize the relevance of skin tone measurement choices in fairness assessment and suggest that ITA-based evaluations may overlook disparities affecting darker-skinned individuals.
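The measures discussed above can be sketched in a few lines. ITA is conventionally computed from CIELAB as $\mathrm{ITA} = \arctan\!\big((L^* - 50)/b^*\big) \cdot 180/\pi$, with the standard Chardon-style thresholds mapping it to six tone categories, and the CIELAB hue angle is $h_{ab} = \arctan(b^*/a^*)$; the Equal Opportunity gap is the spread in true positive rate across groups. The sketch below illustrates these standard definitions only — the paper's exact $H^*$-$L^*$ subgrouping rules and pipeline code are not given here, and the group labels and helper names are illustrative assumptions.

```python
import math

def ita_degrees(L, b):
    # Individual Typology Angle: arctan((L* - 50) / b*), in degrees.
    # atan2 is used so b* = 0 does not divide by zero (assumption: b* >= 0 for skin).
    return math.degrees(math.atan2(L - 50.0, b))

def hue_degrees(a, b):
    # CIELAB hue angle h_ab = arctan(b*/a*), wrapped to [0, 360) degrees.
    return math.degrees(math.atan2(b, a)) % 360.0

def ita_group(ita):
    # Standard six-way ITA classification (thresholds per the common
    # Chardon scheme; boundary handling here is an assumption).
    if ita > 55.0:
        return "very light"
    if ita > 41.0:
        return "light"
    if ita > 28.0:
        return "intermediate"
    if ita > 10.0:
        return "tan"
    if ita > -30.0:
        return "brown"
    return "dark"

def equal_opportunity_gap(tpr_by_group):
    # Equal Opportunity violation as the max-min spread of per-group TPR,
    # e.g. the up-to-0.11 TPR disparity reported in the abstract.
    vals = list(tpr_by_group.values())
    return max(vals) - min(vals)
```

For example, a pixel with $L^* = 70$, $b^* = 15$ gives ITA $\approx 53°$ ("light"), while $L^* = 30$, $b^* = 20$ gives ITA $\approx -45°$ ("dark") — illustrating how strongly the $L^*$ term, and hence lighting, drives the ITA category.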
Similar Papers
TrueSkin: Towards Fair and Accurate Skin Tone Recognition and Generation
CV and Pattern Recognition
Makes computers see and draw skin colors better.
The Impact of Skin Tone Label Granularity on the Performance and Fairness of AI Based Dermatology Image Classification Models
CV and Pattern Recognition
Helps AI better spot skin problems on all skin tones.
Evaluating Fairness and Mitigating Bias in Machine Learning: A Novel Technique using Tensor Data and Bayesian Regression
CV and Pattern Recognition
Makes AI see skin color fairly, not just black/white.