Can Third-parties Read Our Emotions?
By: Jiayi Li, Yingfan Zhou, Pranav Narayanan Venkit, and more
Potential Business Impact:
Computers better guess feelings from writing.
Natural Language Processing tasks that aim to infer an author's private states, e.g., emotions and opinions, from their written text typically rely on datasets annotated by third-party annotators. However, the assumption that third-party annotators can accurately capture authors' private states remains largely unexamined. In this study, we present human subjects experiments on emotion recognition tasks that directly compare third-party annotations with first-party (author-provided) emotion labels. Our findings reveal significant limitations in third-party annotations, whether provided by human annotators or large language models (LLMs), in faithfully representing authors' private states; LLMs nevertheless outperform human annotators nearly across the board. We further explore methods to improve third-party annotation quality. We find that demographic similarity between first-party authors and third-party human annotators enhances annotation performance, while incorporating first-party demographic information into prompts yields a marginal but statistically significant improvement in LLMs' performance. We introduce a framework for evaluating the limitations of third-party annotations and call for refined annotation practices to accurately represent and model authors' private states.
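The core comparison described above, scoring third-party emotion labels against first-party (author-provided) labels, can be sketched as follows. This is a minimal illustration, not the paper's actual evaluation framework; the label sets, metric choices, and example data are all hypothetical.

```python
def agreement_metrics(first_party, third_party):
    """Score third-party emotion labels against first-party (author) labels.

    Both inputs are parallel lists of emotion labels, one per text.
    Returns exact-match accuracy and a per-label F1 dictionary.
    """
    assert len(first_party) == len(third_party) and first_party
    pairs = list(zip(first_party, third_party))
    accuracy = sum(a == b for a, b in pairs) / len(pairs)

    f1 = {}
    for label in set(first_party) | set(third_party):
        # Treat the first-party label as ground truth for this label.
        tp = sum(a == label and b == label for a, b in pairs)
        fp = sum(a != label and b == label for a, b in pairs)
        fn = sum(a == label and b != label for a, b in pairs)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1[label] = (2 * precision * recall / (precision + recall)
                     if precision + recall else 0.0)
    return accuracy, f1

# Hypothetical data: authors' self-reported emotions vs. an annotator's guesses.
authors = ["joy", "anger", "sadness", "joy"]
annotator = ["joy", "sadness", "sadness", "anger"]
acc, f1 = agreement_metrics(authors, annotator)  # acc == 0.5
```

The same scoring function could be run once per annotator group (e.g., human vs. LLM, or demographically matched vs. unmatched) to compare their fidelity to the authors' labels.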
Similar Papers
Authors Should Annotate
Computation and Language
Lets writers label their own words for better AI.
Annotation and modeling of emotions in a textual corpus: an evaluative approach
Computation and Language
Computers understand feelings in writing.
When Large Language Models are Reliable for Judging Empathic Communication
Computation and Language
Computers understand feelings better than people.