Just Put a Human in the Loop? Investigating LLM-Assisted Annotation for Subjective Tasks
By: Hope Schroeder, Deb Roy, Jad Kabbara
Potential Business Impact:
AI suggestions change how people label things.
LLM use in annotation is becoming widespread, and given LLMs' overall promising performance and speed, simply "reviewing" LLM annotations in interpretive tasks can be tempting. In subjective annotation tasks with multiple plausible answers, reviewing LLM outputs can change the label distribution, affecting both the evaluation of LLM performance and downstream social science analyses that use these labels. We conducted a pre-registered experiment with 410 unique annotators and over 7,000 annotations, testing three AI assistance conditions against controls across two models and two datasets. We find that presenting crowdworkers with LLM-generated annotation suggestions did not make them faster, but it did improve their self-reported confidence in the task. More importantly, annotators frequently adopted the LLM suggestions, significantly shifting the label distribution compared to the baseline. When labels created with LLM assistance are used to evaluate LLM performance, reported model performance increases significantly. We believe our work underlines the importance of understanding how LLM-assisted annotation affects subjective, qualitative tasks, the creation of gold data for training and testing, and the evaluation of NLP systems on subjective tasks.
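To see why adopting LLM suggestions can inflate reported model performance, consider a minimal, hypothetical sketch with entirely synthetic numbers (the item count, the LLM agreement rate, and the adoption rate below are illustrative assumptions, not figures from the paper): if assisted annotators take the model's suggestion some fraction of the time, the "gold" labels drift toward the model's own outputs, and accuracy measured against those labels rises even though the model has not changed.

```python
# Hypothetical illustration (synthetic data, not the paper's code or results):
# measuring an LLM against labels that were partly produced by accepting its
# own suggestions inflates its apparent accuracy.
import random

random.seed(0)

N = 10_000            # number of annotated items (assumed)
LLM_AGREEMENT = 0.70  # assumed rate at which the LLM matches unassisted human judgment
ADOPTION_RATE = 0.40  # assumed rate at which assisted annotators take the LLM suggestion

# Unassisted human labels on a binary subjective task (synthetic).
human_labels = [random.choice(["A", "B"]) for _ in range(N)]

# LLM predictions that agree with unassisted humans LLM_AGREEMENT of the time.
llm_labels = [
    h if random.random() < LLM_AGREEMENT else ("B" if h == "A" else "A")
    for h in human_labels
]

# Assisted labels: the annotator adopts the LLM suggestion with probability
# ADOPTION_RATE, otherwise keeps their own judgment.
assisted_labels = [
    llm if random.random() < ADOPTION_RATE else h
    for h, llm in zip(human_labels, llm_labels)
]

def accuracy(preds, refs):
    return sum(p == r for p, r in zip(preds, refs)) / len(refs)

print(f"LLM accuracy vs. unassisted labels:    {accuracy(llm_labels, human_labels):.3f}")
print(f"LLM accuracy vs. LLM-assisted labels:  {accuracy(llm_labels, assisted_labels):.3f}")
```

Under these assumed rates, agreement against unassisted labels stays near 0.70, while agreement against LLM-assisted labels rises toward roughly 0.82, illustrating the evaluation inflation the abstract describes.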
Similar Papers
Evaluating Large Language Models as Expert Annotators
Computation and Language
Computers learn to label text like experts.
Evaluating the Impact of LLM-Assisted Annotation in a Perspectivized Setting: the Case of FrameNet Annotation
Computation and Language
Helps computers understand language faster and better.
A Multi-Task Evaluation of LLMs' Processing of Academic Text Input
Computation and Language
Computers can't yet judge science papers well.