Score: 2

Bias in, Bias out: Annotation Bias in Multilingual Large Language Models

Published: November 18, 2025 | arXiv ID: 2511.14662v1

By: Xia Cui, Ziyi Huang, Naeemeh Adel

Potential Business Impact:

Helps make multilingual AI language models fairer by identifying and reducing bias introduced during data annotation.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Annotation bias in NLP datasets remains a major challenge for developing multilingual Large Language Models (LLMs), particularly in culturally diverse settings. Bias from task framing, annotator subjectivity, and cultural mismatches can distort model outputs and exacerbate social harms. We propose a comprehensive framework for understanding annotation bias, distinguishing among instruction bias, annotator bias, and contextual and cultural bias. We review detection methods (including inter-annotator agreement, model disagreement, and metadata analysis) and highlight emerging techniques such as multilingual model divergence and cultural inference. We further outline proactive and reactive mitigation strategies, including diverse annotator recruitment, iterative guideline refinement, and post-hoc model adjustments. Our contributions include: (1) a typology of annotation bias; (2) a synthesis of detection metrics; (3) an ensemble-based bias mitigation approach adapted for multilingual settings; and (4) an ethical analysis of annotation processes. Together, these insights aim to inform more equitable and culturally grounded annotation pipelines for LLMs.
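
One of the detection methods the abstract lists is inter-annotator agreement. As a minimal illustrative sketch (not the paper's implementation), the Python snippet below computes pairwise Cohen's kappa across annotators and flags split-vote items as candidates for guideline refinement or re-annotation; the annotator names, labels, and review heuristic are assumptions for illustration only.

```python
# Sketch: inter-annotator agreement as a bias-detection signal.
# Annotator names and labels below are hypothetical, not data from the paper.
from itertools import combinations

from sklearn.metrics import cohen_kappa_score

# Hypothetical binary labels from three annotators over the same items.
annotations = {
    "ann_a": [1, 0, 1, 1, 0, 0, 1, 0],
    "ann_b": [1, 0, 0, 1, 0, 1, 1, 0],
    "ann_c": [0, 0, 1, 1, 1, 1, 1, 0],
}

# Pairwise Cohen's kappa: persistently low values can indicate annotator
# subjectivity or unclear instructions rather than random labelling noise.
for (name1, labels1), (name2, labels2) in combinations(annotations.items(), 2):
    kappa = cohen_kappa_score(labels1, labels2)
    print(f"{name1} vs {name2}: kappa = {kappa:.2f}")

# Per-item disagreement: items where annotators split are candidates for
# guideline refinement or re-annotation by a more diverse annotator pool.
n_items = len(next(iter(annotations.values())))
for i in range(n_items):
    votes = [labels[i] for labels in annotations.values()]
    if len(set(votes)) > 1:
        print(f"item {i}: split vote {votes} -> review")
```

In a multilingual setting, the same agreement statistics could be computed per language or per annotator demographic group to surface contextual and cultural bias, in line with the metadata-analysis direction the abstract describes.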

Country of Origin
🇬🇧 United Kingdom, 🇨🇳 China

Page Count
16 pages

Category
Computer Science:
Computation and Language