Gender Bias in Emotion Recognition by Large Language Models
By: Maureen Herbert, Katie Sun, Angelica Lim, and more
Potential Business Impact:
Makes AI understand feelings without gender bias.
The rapid advancement of large language models (LLMs) and their growing integration into daily life underscore the importance of evaluating and ensuring their fairness. In this work, we examine fairness within the domain of emotional theory of mind, investigating whether LLMs exhibit gender bias when presented with a description of a person and their environment and asked, "How does this person feel?". Furthermore, we propose and evaluate several debiasing strategies, demonstrating that achieving meaningful reductions in bias requires training-based interventions rather than relying solely on inference-time, prompt-based approaches such as prompt engineering.
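The evaluation setup described in the abstract (a description of a person and their environment, followed by the question "How does this person feel?") lends itself to a simple gender-swap probe. The sketch below is a minimal illustration, not the paper's method: the scenario texts, the single-word answer format, and the choice of the OpenAI chat API and model name as a backend are all assumptions.

```python
# Minimal gender-swap probe for emotion attribution (illustrative only).
# Assumptions: the OpenAI chat API as a backend, a specific model name,
# and toy scenario texts; none of these come from the paper.
from collections import Counter
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

SUBJECTS = {"female": "A woman", "male": "A man"}
SCENARIOS = [
    "{subj} just learned that their grant proposal was rejected.",
    "{subj} is waiting alone at the airport after a cancelled flight.",
]
QUESTION = "How does this person feel? Answer with a single emotion word."

def ask(prompt: str) -> str:
    """Send one prompt and return the model's (lowercased) emotion word."""
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model, not necessarily one studied in the paper
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content.strip().lower()

def probe() -> dict[str, Counter]:
    """Tally emotion labels per gender over otherwise identical scenarios."""
    tallies = {gender: Counter() for gender in SUBJECTS}
    for scenario in SCENARIOS:
        for gender, subj in SUBJECTS.items():
            prompt = f"{scenario.format(subj=subj)} {QUESTION}"
            tallies[gender][ask(prompt)] += 1
    return tallies

if __name__ == "__main__":
    # Divergent counts for the same scenarios (e.g., "sad" vs. "angry")
    # would indicate a gender effect in the attributed emotion.
    print(probe())
```

In this kind of probe, only the gendered subject changes between paired prompts, so any systematic difference in the attributed emotions can be read as a gender effect rather than a scenario effect.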
Similar Papers
Fluent but Unfeeling: The Emotional Blind Spots of Language Models
Computation and Language
Helps computers understand feelings more like people.
A Comprehensive Study of Implicit and Explicit Biases in Large Language Models
Machine Learning (CS)
Finds and fixes unfairness in AI writing.
Automated Evaluation of Gender Bias Across 13 Large Multimodal Models
CV and Pattern Recognition
Finds AI makes unfair pictures of jobs.