Examining the Metrics for Document-Level Claim Extraction in Czech and Slovak

Published: November 18, 2025 | arXiv ID: 2511.14566v1

By: Lucia Makaiová, Martin Fajčík, Antonín Jarolím

Potential Business Impact:

Checks whether machine-extracted claims match human-annotated ones.

Business Areas:
Text Analytics, Data and Analytics, Software

Document-level claim extraction remains an open challenge in the field of fact-checking, and consequently, methods for evaluating extracted claims have received limited attention. In this work, we explore approaches to aligning two sets of claims pertaining to the same source document and computing their similarity through an alignment score. We investigate techniques to identify the best possible alignment and evaluation method between claim sets, with the aim of providing a reliable evaluation framework. Our approach enables comparison between model-extracted and human-annotated claim sets, serving both as a metric for assessing the extraction performance of models and as a possible measure of inter-annotator agreement. We conduct experiments on a newly collected dataset of claims extracted from comments under Czech and Slovak news articles, a domain that poses additional challenges due to the informal language, strong local context, and subtleties of these closely related languages. The results draw attention to the limitations of current evaluation approaches when applied to document-level claim extraction and highlight the need for more advanced methods, ones able to correctly capture semantic similarity and evaluate essential claim properties such as atomicity, checkworthiness, and decontextualization.
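The core idea of aligning two claim sets and scoring the alignment can be sketched as follows. This is a minimal illustration, not the paper's method: it assumes token-level Jaccard similarity as a stand-in for a real semantic similarity model, and a greedy one-to-one matching with a hypothetical `threshold` cutoff; the names `align_claims` and `jaccard` are invented for this sketch.

```python
from itertools import product

def jaccard(a: str, b: str) -> float:
    """Token-level Jaccard similarity between two claims (a crude
    placeholder for a semantic similarity measure)."""
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb) if ta | tb else 0.0

def align_claims(predicted, reference, threshold=0.3):
    """Greedily match predicted claims to reference claims by descending
    similarity, enforcing a one-to-one alignment, then report the mean
    matched similarity over the larger set (unmatched claims count as 0)."""
    pairs = sorted(
        ((jaccard(p, r), i, j)
         for (i, p), (j, r) in product(enumerate(predicted), enumerate(reference))),
        reverse=True,
    )
    used_p, used_r, matches = set(), set(), []
    for sim, i, j in pairs:
        if sim < threshold or i in used_p or j in used_r:
            continue
        used_p.add(i)
        used_r.add(j)
        matches.append((i, j, sim))
    denom = max(len(predicted), len(reference)) or 1
    score = sum(sim for _, _, sim in matches) / denom
    return matches, score
```

A globally optimal one-to-one alignment would replace the greedy loop with an assignment solver (e.g. the Hungarian algorithm); the greedy version keeps the sketch dependency-free. Note how the denominator penalizes both spurious and missing claims, which is one way a metric could touch on atomicity.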

Repos / Data Links

Page Count
9 pages

Category
Computer Science:
Computation and Language