Will Annotators Disagree? Identifying Subjectivity in Value-Laden Arguments
By: Amir Homayounirad, Enrico Liscio, Tong Wang, and more
Potential Business Impact:
Finds arguments people might see differently.
Aggregating multiple annotations into a single ground-truth label may hide valuable insights into annotator disagreement, particularly in tasks where subjectivity plays a crucial role. In this work, we explore methods for identifying subjectivity in recognizing the human values that motivate arguments. We evaluate two main approaches: inferring subjectivity through value prediction versus identifying subjectivity directly. Our experiments show that direct subjectivity identification significantly improves model performance in flagging subjective arguments. Furthermore, combining contrastive loss with binary cross-entropy loss does not improve performance, but it reduces the dependency on per-label subjectivity. Our proposed methods can help identify arguments that individuals may interpret differently, fostering a more nuanced annotation process.
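The loss combination the abstract mentions can be illustrated with a minimal sketch. The code below is a hypothetical, simplified reconstruction (not the paper's actual implementation): it treats direct subjectivity identification as multi-label binary classification with a binary cross-entropy term over per-value subjectivity targets, plus a simple supervised contrastive term that pulls together embeddings of arguments with the same argument-level subjectivity flag. All function names, the temperature, and the mixing weight `alpha` are assumptions for illustration.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def bce_loss(logits, targets):
    # Binary cross-entropy over per-label subjectivity targets
    # (one column per human value; 1 = annotators likely disagree).
    p = sigmoid(logits)
    eps = 1e-9  # numerical stability
    return -np.mean(targets * np.log(p + eps) + (1 - targets) * np.log(1 - p + eps))

def contrastive_loss(embeddings, subjective, temperature=0.5):
    # Simplified supervised contrastive term: arguments sharing the same
    # argument-level subjectivity flag are treated as positives.
    z = embeddings / np.linalg.norm(embeddings, axis=1, keepdims=True)
    sim = z @ z.T / temperature
    n = len(subjective)
    loss, count = 0.0, 0
    for i in range(n):
        others = [j for j in range(n) if j != i]
        positives = [j for j in others if subjective[j] == subjective[i]]
        if not positives:
            continue  # no positive pair for this anchor
        denom = np.sum(np.exp(sim[i, others]))
        for j in positives:
            loss += -np.log(np.exp(sim[i, j]) / denom)
            count += 1
    return loss / max(count, 1)

def combined_loss(embeddings, logits, targets, alpha=0.5):
    # Argument-level flag: subjective if any per-value label is subjective.
    subj_any = (targets.max(axis=1) > 0).astype(int)
    return alpha * bce_loss(logits, targets) + (1 - alpha) * contrastive_loss(embeddings, subj_any)

# Toy usage: 4 arguments, 8-dim embeddings, 3 value labels.
rng = np.random.default_rng(0)
emb = rng.normal(size=(4, 8))
logits = rng.normal(size=(4, 3))
targets = (rng.random((4, 3)) > 0.5).astype(float)
total = combined_loss(emb, logits, targets)
```

Note that because the contrastive term here conditions only on the coarse argument-level flag, it is one plausible way to read the paper's finding that the combined objective "reduces the dependency on per-label subjectivity" while not improving raw performance.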
Similar Papers
Taking a SEAT: Predicting Value Interpretations from Sentiment, Emotion, Argument, and Topic Annotations
Computation and Language
AI learns how people see the world differently.
Investigating Subjective Factors of Argument Strength: Storytelling, Emotions, and Hedging
Computation and Language
Makes arguments more convincing with stories and feelings.
Towards Characterizing Subjectivity of Individuals through Modeling Value Conflicts and Trade-offs
Computation and Language
Helps computers understand why people make choices.