Dissecting Atomic Facts: Visual Analytics for Improving Fact Annotations in Language Model Evaluation

Published: September 1, 2025 | arXiv ID: 2509.01460v1

By: Manuel Schmidt, Daniel A. Keim, Frederik L. Dennig

Potential Business Impact:

Helps verify whether AI-generated text is factually accurate.

Business Areas:
Text Analytics, Data and Analytics, Software

Factuality evaluation of large language model (LLM) outputs requires decomposing text into discrete "atomic" facts. However, existing definitions of atomicity are underspecified, and empirical results show high disagreement among annotators, both human and model-based, due to unresolved ambiguity in fact decomposition. We present a visual analytics concept to expose and analyze annotation inconsistencies in fact extraction. By visualizing semantic alignment, granularity, and referential dependencies, our approach aims to enable systematic inspection of extracted facts and to facilitate convergence through guided revision loops, establishing a more stable foundation for factuality evaluation benchmarks and improving LLM evaluation.
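To make the annotation-disagreement problem concrete, here is a minimal illustrative sketch (not from the paper): two hypothetical annotators decompose the same sentence into "atomic" facts at different granularities, and a simple Jaccard similarity over normalized fact strings quantifies how little they agree. The fact texts, normalization scheme, and metric are all assumptions for illustration only.

```python
# Illustrative sketch of inter-annotator disagreement in atomic fact
# decomposition. All example facts and the metric are hypothetical.

def normalize(fact: str) -> str:
    """Lowercase and strip punctuation for a crude exact-string comparison."""
    return "".join(ch for ch in fact.lower() if ch.isalnum() or ch.isspace()).strip()

def jaccard(a: set, b: set) -> float:
    """Jaccard similarity of two fact sets (1.0 = identical, 0.0 = disjoint)."""
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

# Annotator A splits finely; annotator B keeps one compound fact.
annotator_a = {"Marie Curie won the Nobel Prize in Physics.",
               "Marie Curie won the Nobel Prize in Chemistry."}
annotator_b = {"Marie Curie won Nobel Prizes in Physics and Chemistry."}

score = jaccard({normalize(f) for f in annotator_a},
                {normalize(f) for f in annotator_b})
print(f"Annotator agreement (Jaccard): {score:.2f}")  # granularity mismatch -> 0.00
```

Even with identical source text, a granularity mismatch alone drives string-level agreement to zero, which is the kind of inconsistency the paper's visual analytics concept is designed to surface and resolve.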

Page Count
3 pages

Category
Computer Science:
Human-Computer Interaction