Examining the Metrics for Document-Level Claim Extraction in Czech and Slovak
By: Lucia Makaiová, Martin Fajčík, Antonín Jarolím
Potential Business Impact:
Checks if computer-found facts match human-found facts.
Document-level claim extraction remains an open challenge in the field of fact-checking, and consequently, methods for evaluating extracted claims have received limited attention. In this work, we explore approaches to aligning two sets of claims pertaining to the same source document and computing their similarity through an alignment score. We investigate techniques to identify the best possible alignment and evaluation method between claim sets, with the aim of providing a reliable evaluation framework. Our approach enables comparison between model-extracted and human-annotated claim sets, serving as a metric for assessing the extraction performance of models and also as a possible measure of inter-annotator agreement. We conduct experiments on a newly collected dataset of claims extracted from comments under Czech and Slovak news articles, domains that pose additional challenges due to the informal language, strong local context, and subtleties of these closely related languages. The results draw attention to the limitations of current evaluation approaches when applied to document-level claim extraction and highlight the need for more advanced methods, ones able to correctly capture semantic similarity and evaluate essential claim properties such as atomicity, checkworthiness, and decontextualization.
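One common way to frame the alignment of two claim sets is as an assignment problem: build a pairwise similarity matrix and find the matching that maximizes total similarity, then aggregate the matched scores. The sketch below is illustrative only and is not the paper's method; it uses a simple token-overlap (Jaccard) similarity as a stand-in for a real semantic similarity model, and the `alignment_score` function and its averaging choice are assumptions for demonstration.

```python
# Illustrative sketch: aligning two claim sets with the Hungarian algorithm.
# The Jaccard similarity here is a placeholder; a real setup would use a
# semantic similarity model, especially for Czech/Slovak informal text.
import numpy as np
from scipy.optimize import linear_sum_assignment


def jaccard(a: str, b: str) -> float:
    """Token-overlap similarity between two claims (placeholder metric)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0


def alignment_score(claims_a, claims_b, sim=jaccard):
    """Match claims one-to-one to maximize total similarity.

    Returns the mean matched similarity, normalized by the larger set
    so that unmatched (missing or extra) claims lower the score.
    """
    S = np.array([[sim(a, b) for b in claims_b] for a in claims_a])
    rows, cols = linear_sum_assignment(-S)  # negate: maximize similarity
    score = S[rows, cols].sum() / max(len(claims_a), len(claims_b))
    return score, list(zip(rows.tolist(), cols.tolist()))


model_claims = ["the earth is flat", "vaccines cause autism"]
human_claims = ["vaccines cause autism", "the earth is flat"]
score, pairs = alignment_score(model_claims, human_claims)
# Identical claims in different order align perfectly: score == 1.0
```

Normalizing by the larger set is one design choice among several; dividing by the smaller set, or reporting precision- and recall-style scores separately, would penalize over- and under-extraction differently.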
Similar Papers
Can LLMs extract human-like fine-grained evidence for evidence-based fact-checking?
Computation and Language
Helps computers find truth in online comments.
Comparison of Unsupervised Metrics for Evaluating Judicial Decision Extraction
Computation and Language
Checks legal documents automatically for accuracy.
Large Language Models for the Summarization of Czech Documents: From History to the Present
Computation and Language
Makes computers understand old Czech writings.