Opt-ICL at LeWiDi-2025: Maximizing In-Context Signal from Rater Examples via Meta-Learning
By: Taylor Sorensen, Yejin Choi
Potential Business Impact:
Teaches computers to understand when people disagree.
Many natural language processing (NLP) tasks involve subjectivity, ambiguity, or legitimate disagreement between annotators. In this paper, we outline our system for modeling human variation. Our system leverages large language models' (LLMs) in-context learning abilities, along with a two-step meta-learning training procedure: 1) post-training on many datasets that require in-context learning, and 2) specializing the model via in-context meta-learning to the particular data distribution of interest. We also evaluate our system's submission to the Learning With Disagreements (LeWiDi) competition, where it was the overall winner on both tasks. Additionally, we perform an ablation study to measure the importance of each system component. We find that including rater examples in-context is crucial to our system's performance, that dataset-specific fine-tuning helps on the larger datasets, that post-training on other in-context datasets helps on one of the competition datasets, and that performance improves with model scale.
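To make the abstract's core mechanism concrete, here is a minimal Python sketch, not the authors' code, of how a rater's prior annotations might be serialized as in-context examples so an LLM can predict that rater's label on a new item. The RaterExample class, the build_prompt function, and the prompt format are all hypothetical illustrations.

# Hypothetical sketch: condition an LLM on one rater's past annotations
# so it can predict how that specific rater would label a new item.
# Names and prompt format are assumptions, not the paper's actual code.

from dataclasses import dataclass
from typing import List

@dataclass
class RaterExample:
    text: str   # the annotated item
    label: str  # this particular rater's label for it

def build_prompt(task: str, history: List[RaterExample], query: str) -> str:
    """Serialize one rater's prior annotations as in-context examples."""
    lines = [f"Task: {task}", "Previous annotations by this rater:"]
    for ex in history:
        lines.append(f"Text: {ex.text}\nLabel: {ex.label}")
    # The LLM completes the final label, conditioned on the rater's history.
    lines.append(f"Text: {query}\nLabel:")
    return "\n\n".join(lines)

if __name__ == "__main__":
    history = [
        RaterExample("That joke was hilarious.", "not offensive"),
        RaterExample("People like you shouldn't vote.", "offensive"),
    ]
    print(build_prompt("offensiveness rating", history, "What a ridiculous take."))

Under this reading, fine-tuning on many such rater-conditioned prompts, first across many in-context learning datasets and then on the target dataset, would correspond to the two meta-learning steps the abstract describes.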
Similar Papers
Bridging the Gap: In-Context Learning for Modeling Human Disagreement
Computation and Language
Helps computers understand when people disagree.
DeMeVa at LeWiDi-2025: Modeling Perspectives with In-Context Learning and Label Distribution Learning
Computation and Language
Helps computers understand different opinions better.
Unlocking In-Context Learning for Natural Datasets Beyond Language Modelling
Computation and Language
Teaches computers to learn new things from examples.