Score: 1

Highlight All the Phrases: Enhancing LLM Transparency through Visual Factuality Indicators

Published: August 9, 2025 | arXiv ID: 2508.06846v1

By: Hyo Jin Do, Rachel Ostrand, Werner Geyer and more

BigTech Affiliations: IBM

Potential Business Impact:

Color highlighting shows users which parts of an AI's answer are likely accurate.

Large language models (LLMs) are susceptible to generating inaccurate or false information, often referred to as "hallucinations" or "confabulations." While several technical advancements have been made to detect hallucinated content by assessing the factuality of the model's responses, there is still limited research on how to effectively communicate this information to users. To address this gap, we conducted two scenario-based experiments with a total of 208 participants to systematically compare the effects of various design strategies for communicating factuality scores by assessing participants' ratings of trust, ease in validating response accuracy, and preference. Our findings reveal that participants preferred and trusted a design in which all phrases within a response were color-coded based on factuality scores. Participants also found it easier to validate the accuracy of the response in this style compared to a baseline with no style applied. Our study offers practical design guidelines for LLM application developers and designers, aimed at calibrating user trust, aligning with user preferences, and enhancing users' ability to scrutinize LLM outputs.
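The preferred design color-codes every phrase in a response by its factuality score. As a rough illustration only, here is a minimal Python sketch of that rendering idea; the [0, 1] score scale, the red-to-green color ramp, and the helper names are assumptions for this example, not the paper's implementation.

```python
# Hypothetical sketch of a "highlight all phrases" rendering: each phrase is
# wrapped in an HTML <span> whose background color reflects its factuality
# score. The scale, colors, and function names are assumptions, not the
# authors' method.
from html import escape


def score_to_color(score: float) -> str:
    """Map a factuality score in [0, 1] to a red-to-green background color."""
    score = max(0.0, min(1.0, score))
    red = int(255 * (1.0 - score))
    green = int(255 * score)
    return f"rgba({red}, {green}, 0, 0.35)"  # translucent so text stays readable


def highlight_all_phrases(phrases: list[tuple[str, float]]) -> str:
    """Wrap every (phrase, score) pair in a color-coded <span>."""
    spans = [
        f'<span style="background-color: {score_to_color(score)}" '
        f'title="factuality: {score:.2f}">{escape(phrase)}</span>'
        for phrase, score in phrases
    ]
    return " ".join(spans)


if __name__ == "__main__":
    response = [
        ("The Eiffel Tower is in Paris.", 0.97),
        ("It was completed in 1912.", 0.18),  # low score: likely hallucinated
    ]
    print(highlight_all_phrases(response))
```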

Country of Origin
🇺🇸 United States

Page Count
16 pages

Category
Computer Science:
Human-Computer Interaction