
Evaluating the Impact of LLM-Assisted Annotation in a Perspectivized Setting: the Case of FrameNet Annotation

Published: October 29, 2025 | arXiv ID: 2510.25904v1

By: Frederico Belcavello, Ely Matos, Arthur Lorenzi, and more

Potential Business Impact:

Speeds up the creation of semantically annotated language datasets that help computers understand meaning.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

The use of LLM-based applications to accelerate and/or substitute human labor in the creation of language resources and datasets is a reality. Nonetheless, despite the potential of such tools for linguistic research, a comprehensive evaluation of their performance and impact on the creation of annotated datasets, especially under a perspectivized approach to NLP, is still missing. This paper contributes to reducing this gap by reporting on an extensive evaluation of the (semi-)automatization of FrameNet-like semantic annotation using an LLM-based semantic role labeler. The methodology compares annotation time, coverage, and diversity across three experimental settings: manual, automatic, and semi-automatic annotation. Results show that the hybrid, semi-automatic setting leads to increased frame diversity and comparable annotation coverage relative to the human-only setting, while the fully automatic setting performs considerably worse on all metrics except annotation time.
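To make the coverage and frame-diversity metrics named in the abstract concrete, here is a minimal, hypothetical sketch (not taken from the paper): it treats each annotation as a (target word, frame) pair and compares invented annotation sets for the three settings. All data, function names, and the number of targets are illustrative assumptions.

```python
# Illustrative sketch only: hypothetical annotation sets for the three settings,
# scored on two metrics mentioned in the abstract -- coverage (share of target
# words that received a frame label) and frame diversity (distinct frames used).

from typing import Dict, List, Tuple

Annotation = Tuple[str, str]  # (target word, FrameNet frame name)

def coverage(annotations: List[Annotation], n_targets: int) -> float:
    """Fraction of candidate target words that received a frame label."""
    return len({target for target, _ in annotations}) / n_targets

def frame_diversity(annotations: List[Annotation]) -> int:
    """Number of distinct frames evoked across the annotation set."""
    return len({frame for _, frame in annotations})

if __name__ == "__main__":
    # Invented example data; frame names follow FrameNet naming conventions.
    settings: Dict[str, List[Annotation]] = {
        "manual": [("buy", "Commerce_buy"), ("walk", "Self_motion")],
        "automatic": [("buy", "Commerce_buy")],
        "semi-automatic": [("buy", "Commerce_buy"), ("walk", "Self_motion"),
                           ("decide", "Deciding")],
    }
    n_targets = 3  # hypothetical number of annotatable targets in the sentence
    for name, anns in settings.items():
        print(f"{name}: coverage={coverage(anns, n_targets):.2f}, "
              f"diversity={frame_diversity(anns)}")
```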

Country of Origin
🇧🇷 Brazil, 🇸🇪 Sweden

Page Count
11 pages

Category
Computer Science:
Computation and Language