Gender Bias in Emotion Recognition by Large Language Models

Published: November 24, 2025 | arXiv ID: 2511.19785v1

By: Maureen Herbert, Katie Sun, Angelica Lim, and more

Potential Business Impact:

Helps AI systems recognize people's emotions without gender bias.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

The rapid advancement of large language models (LLMs) and their growing integration into daily life underscore the importance of evaluating and ensuring their fairness. In this work, we examine fairness within the domain of emotional theory of mind, investigating whether LLMs exhibit gender bias when presented with a description of a person and their environment and asked, "How does this person feel?" Furthermore, we propose and evaluate several debiasing strategies, demonstrating that achieving meaningful reductions in bias requires training-based interventions rather than relying solely on inference-time, prompt-based approaches such as prompt engineering.
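The evaluation the abstract describes can be illustrated with a counterfactual gender-swap probe: pose the same scenario with only the pronoun changed and compare the model's emotion predictions. This is a minimal sketch under assumptions, not the paper's actual method; `query_llm` is a hypothetical stand-in, stubbed here with a deliberately biased toy responder so the protocol itself is runnable.

```python
# Counterfactual gender-swap probe for emotional theory-of-mind bias.
# Assumption: a real evaluation would replace query_llm with an actual
# model API call; the stub below is a toy, intentionally biased model.

SCENARIO = "{pronoun} just received unexpected criticism at work."

def query_llm(prompt: str) -> str:
    """Hypothetical LLM call; stubbed with a biased toy responder."""
    return "sadness" if "She" in prompt else "anger"

def gender_swap_probe(scenario_template: str) -> dict:
    """Ask 'How does this person feel?' for each gendered variant."""
    answers = {}
    for pronoun in ("He", "She"):
        prompt = (scenario_template.format(pronoun=pronoun)
                  + " How does this person feel?")
        answers[pronoun] = query_llm(prompt)
    return answers

answers = gender_swap_probe(SCENARIO)
# Bias signal: the predicted emotion changes when only the pronoun does.
bias_detected = answers["He"] != answers["She"]
print(answers, "bias detected:", bias_detected)
```

Aggregating this comparison over many scenarios and emotion labels gives a simple bias metric; the paper's finding is that prompt-level fixes alone do not substantially reduce such gaps.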

Country of Origin
🇨🇦 Canada

Page Count
9 pages

Category
Computer Science:
Computation and Language