Can LLMs extract human-like fine-grained evidence for evidence-based fact-checking?
By: Antonín Jarolím, Martin Fajčík, Lucia Makaiová
Potential Business Impact:
Helps computers find truth in online comments.
Misinformation frequently spreads in user comments under online news articles, highlighting the need for effective methods to detect factually incorrect information. To strongly support or refute claims extracted from such comments, it is necessary to identify relevant documents and pinpoint the exact text spans that justify or contradict each claim. This paper focuses on the latter task -- fine-grained evidence extraction for Czech and Slovak claims. We create a new dataset containing fine-grained evidence two-way annotated by paid annotators. We evaluate large language models (LLMs) on this dataset to assess their alignment with human annotations. The results reveal that LLMs often fail to copy evidence verbatim from the source text, leading to invalid outputs. Error-rate analysis shows that the llama3.1:8b model achieves a high proportion of correct outputs despite its relatively small size, while the gpt-oss:120b model underperforms despite having many more parameters. Furthermore, the models qwen3:14b, deepseek-r1:32b, and gpt-oss:20b demonstrate an effective balance between model size and alignment with human annotations.
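The abstract's validity criterion, that extracted evidence must be copied verbatim from the source document, can be illustrated with a minimal sketch. The function names and whitespace normalisation below are illustrative assumptions, not the paper's actual evaluation code.

```python
import re


def is_verbatim_evidence(evidence: str, source_text: str) -> bool:
    """Check whether an extracted evidence span appears verbatim in the source.

    Whitespace is normalised before matching, since line breaks and spacing
    in the source often differ from the model's output formatting.
    """
    def normalise(s: str) -> str:
        return re.sub(r"\s+", " ", s).strip()

    return normalise(evidence) in normalise(source_text)


def verbatim_error_rate(pairs: list[tuple[str, str]]) -> float:
    """Fraction of (evidence, source) pairs where evidence is NOT copied verbatim."""
    if not pairs:
        return 0.0
    invalid = sum(1 for ev, src in pairs if not is_verbatim_evidence(ev, src))
    return invalid / len(pairs)


# Toy example: the second span paraphrases the source, so it counts as invalid.
pairs = [
    ("the vaccine was approved in 2021",
     "Officials confirmed the vaccine was approved in 2021."),
    ("officials approved the vaccine two years ago",
     "Officials confirmed the vaccine was approved in 2021."),
]
print(verbatim_error_rate(pairs))  # -> 0.5
```

A check along these lines would flag model outputs that paraphrase rather than quote the source, which the paper identifies as a frequent failure mode.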
Similar Papers
Large Language Models for Full-Text Methods Assessment: A Case Study on Mediation Analysis
Computation and Language
Helps computers understand science papers better.
Ground Truth Generation for Multilingual Historical NLP using LLMs
Computation and Language
Helps computers understand old books and writings.
Comparing LLM Text Annotation Skills: A Study on Human Rights Violations in Social Media Data
Computation and Language
Helps computers find human rights issues in text.