Score: 2

On the Robustness of Verbal Confidence of LLMs in Adversarial Attacks

Published: July 9, 2025 | arXiv ID: 2507.06489v1

By: Stephen Obadinma, Xiaodan Zhu

Potential Business Impact:

Shows how easily an AI's stated confidence in its answers can be manipulated, motivating more honest and robust confidence reporting.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Robust verbal confidence generated by large language models (LLMs) is crucial for their deployment, ensuring transparency, trust, and safety in human-AI interactions across many high-stakes applications. In this paper, we present the first comprehensive study on the robustness of verbal confidence under adversarial attacks. We introduce a novel framework for attacking verbal confidence scores through both perturbation and jailbreak-based methods, and show that these attacks can significantly jeopardize verbal confidence estimates and lead to frequent answer changes. We examine a variety of prompting strategies, model sizes, and application domains, revealing that current confidence elicitation methods are vulnerable and that commonly used defence techniques are largely ineffective or counterproductive. Our findings underscore the urgent need to design more robust mechanisms for confidence expression in LLMs, as even subtle semantics-preserving modifications can lead to misleading confidence in responses.
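The abstract does not include code, so as a rough illustration of the setting only, the sketch below shows what eliciting verbal confidence from a model and applying a semantics-preserving perturbation to the input could look like. The prompt template, the `query_llm` placeholder, and the character-swap edit are all assumptions for demonstration; they are not the authors' attack framework or elicitation prompts.

```python
import random

# Illustrative sketch only. `query_llm`, the prompt template, and the
# character-swap perturbation are assumptions for demonstration, not the
# attack or elicitation methods from the paper (2507.06489).

CONFIDENCE_PROMPT = (
    "Answer the question, then state your confidence as a number "
    "between 0 and 100.\n\n"
    "Question: {question}\n"
    "Format: Answer: <answer> | Confidence: <0-100>"
)


def query_llm(prompt: str) -> str:
    """Placeholder for a real model call (API or local inference)."""
    # Replace with an actual LLM call; a canned response keeps the sketch runnable.
    return "Answer: Canberra | Confidence: 90"


def swap_adjacent_chars(text: str, seed: int = 0) -> str:
    """Toy semantics-preserving perturbation: swap two adjacent characters
    inside one randomly chosen word (a simple typo-style edit)."""
    rng = random.Random(seed)
    words = text.split()
    candidates = [i for i, w in enumerate(words) if len(w) > 3]
    if not candidates:
        return text
    i = rng.choice(candidates)
    w = words[i]
    j = rng.randrange(len(w) - 1)
    words[i] = w[:j] + w[j + 1] + w[j] + w[j + 2:]
    return " ".join(words)


def elicit(question: str) -> str:
    """Ask the model for an answer together with a verbal confidence score."""
    return query_llm(CONFIDENCE_PROMPT.format(question=question))


if __name__ == "__main__":
    q = "What is the capital of Australia?"
    print("Clean input:    ", elicit(q))
    print("Perturbed input:", elicit(swap_adjacent_chars(q)))
```

Comparing the confidence reported for the clean and perturbed inputs is the kind of before/after measurement the paper's robustness evaluation is concerned with, here reduced to a single hand-rolled perturbation for illustration.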

Country of Origin
🇨🇦 Canada

Repos / Data Links

Page Count
42 pages

Category
Computer Science:
Computation and Language