Can LLMs extract human-like fine-grained evidence for evidence-based fact-checking?

Published: November 26, 2025 | arXiv ID: 2511.21401v1

By: Antonín Jarolím, Martin Fajčík, Lucia Makaiová

Potential Business Impact:

Helps computers find truth in online comments.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Misinformation frequently spreads in user comments under online news articles, highlighting the need for effective methods to detect factually incorrect information. To strongly support or refute claims extracted from such comments, it is necessary to identify relevant documents and pinpoint the exact text spans that justify or contradict each claim. This paper focuses on the latter task -- fine-grained evidence extraction for Czech and Slovak claims. We create a new dataset containing fine-grained evidence two-way annotated by paid annotators. We evaluate large language models (LLMs) on this dataset to assess their alignment with human annotations. The results reveal that LLMs often fail to copy evidence verbatim from the source text, leading to invalid outputs. Error-rate analysis shows that the llama3.1:8b model achieves a high proportion of correct outputs despite its relatively small size, while the gpt-oss-120b model underperforms despite having many more parameters. Furthermore, the models qwen3:14b, deepseek-r1:32b, and gpt-oss:20b demonstrate an effective balance between model size and alignment with human annotations.
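The abstract's notion of an "invalid output" is an evidence span that is not a verbatim copy of the source text. Below is a minimal sketch (not the paper's evaluation code) of how such a validity check and the resulting error rate might be computed; the whitespace normalization is an assumption, not a rule stated in the paper.

```python
def is_verbatim_span(evidence: str, source_text: str) -> bool:
    """Return True if the extracted evidence appears verbatim in the source."""
    # Collapse whitespace so that line-break differences alone do not
    # invalidate an otherwise exact copy (an assumption for illustration).
    normalize = lambda s: " ".join(s.split())
    return normalize(evidence) in normalize(source_text)


def invalid_output_rate(extractions: list[tuple[str, str]]) -> float:
    """Fraction of (evidence, source) pairs whose evidence is not a verbatim span."""
    if not extractions:
        return 0.0
    invalid = sum(1 for ev, src in extractions if not is_verbatim_span(ev, src))
    return invalid / len(extractions)


if __name__ == "__main__":
    pairs = [
        # Exact copy of a span from the source -> valid
        ("the vaccine was approved in 2021",
         "Officials said the vaccine was approved in 2021."),
        # Paraphrase rather than a verbatim span -> invalid
        ("vaccine approved 2021",
         "Officials said the vaccine was approved in 2021."),
    ]
    print(f"Invalid-output rate: {invalid_output_rate(pairs):.2f}")
```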

Country of Origin
🇨🇿 Czech Republic

Page Count
11 pages

Category
Computer Science:
Computation and Language