Evaluating the Impact of LLM-Assisted Annotation in a Perspectivized Setting: the Case of FrameNet Annotation
By: Frederico Belcavello, Ely Matos, Arthur Lorenzi, and more
Potential Business Impact:
Lets AI assist humans in labeling language data faster and with more variety.
The use of LLM-based applications to accelerate or substitute for human labor in the creation of language resources and datasets is now a reality. Nonetheless, despite the potential of such tools for linguistic research, a comprehensive evaluation of their performance and their impact on the creation of annotated datasets, especially under a perspectivized approach to NLP, is still missing. This paper contributes to narrowing this gap by reporting on an extensive evaluation of the (semi-)automation of FrameNet-like semantic annotation using an LLM-based semantic role labeler. The methodology compares annotation time, coverage, and diversity across three experimental settings: manual, automatic, and semi-automatic annotation. Results show that the hybrid, semi-automatic setting yields increased frame diversity and similar annotation coverage compared to the human-only setting, while the fully automatic setting performs considerably worse on all metrics except annotation time.
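The abstract does not spell out how coverage and diversity are operationalized, but the comparison it describes can be illustrated with a minimal sketch. The sketch below assumes a hypothetical setup in which each experimental setting produces a list of frame labels over a fixed set of annotatable targets; coverage is taken as the share of targets that received a label, and diversity as the number of distinct frames plus the Shannon entropy of the frame distribution. All names and data here are illustrative, not the paper's actual evaluation code.

```python
from collections import Counter
from math import log2

# Hypothetical frame labels produced in each experimental setting
# (one label per annotated target; data is illustrative only).
annotations = {
    "manual":         ["Motion", "Arriving", "Motion", "Commerce_buy"],
    "automatic":      ["Motion", "Motion", "Motion"],
    "semi_automatic": ["Motion", "Arriving", "Commerce_buy", "Departing"],
}

TOTAL_TARGETS = 5  # assumed number of annotatable targets in the sample


def coverage(frames: list[str], total_targets: int) -> float:
    """Share of annotatable targets that received a frame label."""
    return len(frames) / total_targets


def diversity(frames: list[str]) -> float:
    """Frame diversity as Shannon entropy of the frame distribution."""
    counts = Counter(frames)
    n = len(frames)
    return -sum((c / n) * log2(c / n) for c in counts.values())


for setting, frames in annotations.items():
    print(
        f"{setting:>14}: coverage={coverage(frames, TOTAL_TARGETS):.2f}, "
        f"distinct frames={len(set(frames))}, entropy={diversity(frames):.2f}"
    )
```

Under these assumed definitions, the semi-automatic row would show the pattern the paper reports: coverage comparable to the manual setting but a larger set of distinct frames, while the automatic setting trails on both.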
Similar Papers
Just Put a Human in the Loop? Investigating LLM-Assisted Annotation for Subjective Tasks
Computers and Society
AI suggestions change how people label things.
Evaluating Large Language Models as Expert Annotators
Computation and Language
Computers learn to label text like experts.
Augmenting Image Annotation: A Human-LMM Collaborative Framework for Efficient Object Selection and Label Generation
CV and Pattern Recognition
AI helps label pictures faster for computers.