Bias in, Bias out: Annotation Bias in Multilingual Large Language Models
By: Xia Cui, Ziyi Huang, Naeemeh Adel
Potential Business Impact:
Helps build fairer multilingual AI language models by reducing bias introduced during data annotation.
Annotation bias in NLP datasets remains a major challenge for developing multilingual Large Language Models (LLMs), particularly in culturally diverse settings. Bias from task framing, annotator subjectivity, and cultural mismatches can distort model outputs and exacerbate social harms. We propose a comprehensive framework for understanding annotation bias, distinguishing among instruction bias, annotator bias, and contextual and cultural bias. We review detection methods (including inter-annotator agreement, model disagreement, and metadata analysis) and highlight emerging techniques such as multilingual model divergence and cultural inference. We further outline proactive and reactive mitigation strategies, including diverse annotator recruitment, iterative guideline refinement, and post-hoc model adjustments. Our contributions include: (1) a typology of annotation bias; (2) a synthesis of detection metrics; (3) an ensemble-based bias mitigation approach adapted for multilingual settings; and (4) an ethical analysis of annotation processes. Together, these insights aim to inform more equitable and culturally grounded annotation pipelines for LLMs.
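
For intuition, here is a minimal sketch of two of the detection signals mentioned in the abstract: inter-annotator agreement (computed as Cohen's kappa) and a simple model-disagreement rate between two sets of predictions. This is an illustrative example only, not the paper's implementation; the function names and toy labels below are assumptions made for this sketch.

# Illustrative sketch: two bias-detection signals computed from label lists.
# Not the paper's code; names and toy data are hypothetical.
from collections import Counter

def cohen_kappa(labels_a, labels_b):
    # Inter-annotator agreement beyond chance between two annotators.
    assert len(labels_a) == len(labels_b)
    n = len(labels_a)
    observed = sum(a == b for a, b in zip(labels_a, labels_b)) / n
    counts_a, counts_b = Counter(labels_a), Counter(labels_b)
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (observed - expected) / (1 - expected) if expected < 1 else 1.0

def disagreement_rate(preds_a, preds_b):
    # Fraction of items where two models (e.g., trained on different
    # language subsets) assign different labels -- a simple divergence signal.
    return sum(a != b for a, b in zip(preds_a, preds_b)) / len(preds_a)

# Toy example: eight items labelled "biased" or "neutral" by two annotators.
annotator_1 = ["biased", "neutral", "biased", "biased", "neutral", "neutral", "biased", "neutral"]
annotator_2 = ["biased", "neutral", "neutral", "biased", "neutral", "biased", "biased", "neutral"]
print(f"kappa = {cohen_kappa(annotator_1, annotator_2):.2f}")          # 0.50
print(f"disagreement = {disagreement_rate(annotator_1, annotator_2):.2f}")  # 0.25

In practice, low kappa on items from a particular language or culture, or high divergence between models trained on different language subsets, would flag those items for guideline refinement or re-annotation.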
Similar Papers
Social Bias in Multilingual Language Models: A Survey
Computation and Language
Surveys social bias in multilingual language models across cultures.
Simulating a Bias Mitigation Scenario in Large Language Models
Computation and Language
Simulates bias mitigation strategies to make large language models fairer.