Large Language Models Are Effective Human Annotation Assistants, But Not Good Independent Annotators
By: Feng Gu, Zongxia Li, Carlos Rafael Colon, and more
Potential Business Impact:
Helps people find important news faster.
Event annotation is important for identifying market changes, monitoring breaking news, and understanding sociological trends. Although expert annotators set the gold standard, human coding is expensive and inefficient. Unlike information extraction experiments that focus on single contexts, we evaluate a holistic workflow that removes irrelevant documents, merges documents about the same event, and annotates the events. Although LLM-based automated annotations are better than traditional TF-IDF-based methods for Event Set Curation, they are still not reliable annotators compared to human experts. However, adding LLMs to assist experts with Event Set Curation can reduce the time and mental effort required for Variable Annotation. When LLMs are used to extract event variables to assist expert annotators, the experts agree with the LLM-extracted variables more often than with fully automated LLM annotations.
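The three-stage workflow described in the abstract (filter irrelevant documents, merge documents about the same event, extract event variables for expert review) can be pictured as a simple pipeline. The sketch below is a minimal illustration only, assuming a hypothetical call_llm helper and example variable names (date, location, actors); it is not the authors' implementation.

```python
# Minimal sketch of the three-stage event annotation workflow.
# call_llm, the prompts, and the variable names are illustrative assumptions,
# not the authors' actual system.
from dataclasses import dataclass, field


@dataclass
class Document:
    doc_id: str
    text: str


@dataclass
class EventCluster:
    documents: list = field(default_factory=list)
    variables: dict = field(default_factory=dict)


def call_llm(prompt: str) -> str:
    """Placeholder for an LLM API call (hypothetical)."""
    raise NotImplementedError("Plug in a real LLM client here.")


def is_relevant(doc: Document) -> bool:
    # Stage 1: drop documents that do not describe an event of interest.
    answer = call_llm(
        "Does this article describe an event relevant to the annotation task? "
        f"Answer yes or no.\n\n{doc.text}"
    )
    return answer.strip().lower().startswith("yes")


def same_event(cluster: EventCluster, doc: Document) -> bool:
    # Stage 2 (Event Set Curation): merge documents reporting the same event.
    context = "\n---\n".join(d.text for d in cluster.documents)
    answer = call_llm(
        "Do these articles describe the same event? Answer yes or no.\n\n"
        f"Existing event:\n{context}\n\nNew article:\n{doc.text}"
    )
    return answer.strip().lower().startswith("yes")


def extract_variables(cluster: EventCluster, variables: list) -> dict:
    # Stage 3 (Variable Annotation): extract candidate values for expert review.
    context = "\n---\n".join(d.text for d in cluster.documents)
    return {
        var: call_llm(f"Extract the {var} of the event from these articles:\n\n{context}")
        for var in variables
    }


def annotate(docs, variables=("date", "location", "actors")):
    clusters = []
    for doc in docs:
        if not is_relevant(doc):
            continue
        for cluster in clusters:
            if same_event(cluster, doc):
                cluster.documents.append(doc)
                break
        else:
            clusters.append(EventCluster(documents=[doc]))
    for cluster in clusters:
        # Extracted values are suggestions for experts to verify, not final annotations.
        cluster.variables = extract_variables(cluster, list(variables))
    return clusters
```

In this assistive setup the LLM output is treated as a draft: experts verify or correct the extracted variables rather than accepting them outright, which is where the paper reports the time and effort savings.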
Similar Papers
Reliable Annotations with Less Effort: Evaluating LLM-Human Collaboration in Search Clarifications
Information Retrieval
Helps computers label things better with human help.
Evaluating Large Language Models as Expert Annotators
Computation and Language
Computers learn to label text like experts.
Large Language Models for Full-Text Methods Assessment: A Case Study on Mediation Analysis
Computation and Language
Helps computers understand science papers better.