Modeling Annotator Disagreement with Demographic-Aware Experts and Synthetic Perspectives
By: Yinuo Xu, Veronica Derricks, Allison Earl, and more
Potential Business Impact:
Helps computers understand different people's opinions.
We present an approach to modeling annotator disagreement in subjective NLP tasks through both architectural and data-centric innovations. Our model, DEM-MoE (Demographic-Aware Mixture of Experts), routes inputs to expert subnetworks based on annotator demographics, enabling it to represent structured, group-level variation better than prior models. DEM-MoE performs competitively across demographic groups and shows especially strong results on datasets with high annotator disagreement. To address sparse demographic coverage, we test whether LLM-generated synthetic annotations, produced via zero-shot persona prompting, can be used for data imputation. These synthetic judgments align moderately well with human annotations on our data and potentially offer a scalable way to enrich training data. We then propose and evaluate strategies for blending real and synthetic data, and find that the optimal blending strategy depends on dataset structure. Together, these contributions improve how models represent diverse perspectives.
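To make the routing idea concrete, below is a minimal sketch of demographic-aware expert routing, assuming a PyTorch-style setup. The gate design, layer sizes, and soft (weighted) routing are illustrative assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DemographicMoE(nn.Module):
    """Sketch of a demographic-aware mixture of experts.

    Inputs are routed to expert subnetworks by a gate conditioned on
    the annotator's demographic representation, so each expert can
    specialize in group-level annotation patterns.
    """

    def __init__(self, text_dim, demo_dim, hidden_dim, num_experts, num_labels):
        super().__init__()
        self.experts = nn.ModuleList(
            nn.Sequential(
                nn.Linear(text_dim, hidden_dim),
                nn.ReLU(),
                nn.Linear(hidden_dim, num_labels),
            )
            for _ in range(num_experts)
        )
        # Gate over experts, driven by demographics rather than the text.
        self.gate = nn.Linear(demo_dim, num_experts)

    def forward(self, text_repr, demo_repr):
        weights = F.softmax(self.gate(demo_repr), dim=-1)           # (B, E)
        expert_out = torch.stack(
            [expert(text_repr) for expert in self.experts], dim=1)  # (B, E, L)
        # Demographically weighted combination of expert predictions.
        return (weights.unsqueeze(-1) * expert_out).sum(dim=1)      # (B, L)
```

Conditioning the gate on demographics rather than on the text itself is the design choice that would let experts capture structured, group-level variation in labels.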
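The synthetic-annotation step can likewise be sketched as a zero-shot persona prompt. The attribute names, task framing, and wording below are hypothetical placeholders; the paper's actual prompt template is not reproduced here.

```python
def persona_prompt(text: str, demographics: dict[str, str]) -> str:
    """Build a hypothetical zero-shot persona prompt for an LLM annotator."""
    persona = ", ".join(f"{k}: {v}" for k, v in demographics.items())
    return (
        f"You are an annotator with this profile: {persona}. "
        "Answering as this person would, rate the text below on a 1-5 scale.\n\n"
        f"Text: {text}\n\nRating:"
    )

# Example: impute a judgment for an underrepresented demographic cell.
prompt = persona_prompt(
    "Example post to annotate.",
    {"age": "25-34", "gender": "woman", "region": "rural"},
)
```

Completions from such prompts could then fill sparse demographic cells before blending with real annotations.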
Similar Papers
Beyond Demographics: Fine-tuning Large Language Models to Predict Individuals' Subjective Text Perceptions
Computation and Language
Teaches computers to predict how individual people perceive text.
Assessing the Reliability of LLMs Annotations in the Context of Demographic Bias and Model Explanation
Computation and Language
Helps computers judge sexism fairly, not by who wrote it.
Bridging the Gap: In-Context Learning for Modeling Human Disagreement
Computation and Language
Helps computers understand when people disagree.