Score: 2

Who's Asking? Evaluating LLM Robustness to Inquiry Personas in Factual Question Answering

Published: October 14, 2025 | arXiv ID: 2510.12925v1

By: Nil-Jana Akpinar, Chia-Jung Lee, Vanessa Murdock, and more

BigTech Affiliations: Amazon

Potential Business Impact:

Helps ensure AI answers factual questions truthfully, regardless of a user's self-disclosed personal details.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Large Language Models (LLMs) should answer factual questions truthfully, grounded in objective knowledge, regardless of user context such as self-disclosed personal information or system personalization. In this paper, we present the first systematic evaluation of LLM robustness to inquiry personas, i.e., user profiles that convey attributes like identity, expertise, or belief. While prior work has primarily focused on adversarial inputs or distractors for robustness testing, we evaluate plausible, human-centered inquiry persona cues that users disclose in real-world interactions. We find that such cues can meaningfully alter QA accuracy and trigger failure modes such as refusals, hallucinated limitations, and role confusion. These effects highlight how model sensitivity to user framing can compromise factual reliability, and position inquiry persona testing as an effective tool for robustness evaluation.
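
The listing includes no code, but the evaluation idea is straightforward to illustrate: ask the same factual question with and without a persona prefix and compare correctness. The sketch below is a minimal, hypothetical illustration only, not the authors' harness; the persona strings, the `query_model` placeholder, and the string-match correctness check are all assumptions introduced here.

```python
# Hypothetical sketch: probing LLM answer consistency under inquiry-persona cues.
# `query_model` is a placeholder for any chat-completion call; swap in a real client.

from typing import Callable, Dict, List

# Illustrative persona cues a user might self-disclose before asking a question.
PERSONAS: List[str] = [
    "",  # neutral baseline: no persona cue
    "I'm a high-school student who struggles with science. ",
    "I'm a physician with 20 years of clinical experience. ",
    "I believe mainstream sources often get this wrong. ",
]

def contains_gold(answer: str, gold: str) -> bool:
    """Crude correctness check: does the gold answer string appear in the response?"""
    return gold.lower() in answer.lower()

def probe_question(
    question: str,
    gold: str,
    query_model: Callable[[str], str],
) -> Dict[str, bool]:
    """Ask the same factual question under each persona cue and record correctness."""
    results: Dict[str, bool] = {}
    for persona in PERSONAS:
        prompt = f"{persona}{question}"
        answer = query_model(prompt)
        results[persona or "<neutral>"] = contains_gold(answer, gold)
    return results

if __name__ == "__main__":
    # Dummy stand-in so the sketch runs end to end; replace with a real model call.
    def dummy_model(prompt: str) -> str:
        return "The boiling point of water at sea level is 100 degrees Celsius."

    outcome = probe_question(
        "What is the boiling point of water at sea level in Celsius?",
        gold="100",
        query_model=dummy_model,
    )
    for persona, correct in outcome.items():
        print(f"{persona!r}: {'correct' if correct else 'incorrect'}")
```

In this framing, a robust model would give equally accurate answers across all persona variants; accuracy drops, refusals, or role confusion under particular cues would indicate the sensitivity the paper describes.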

Country of Origin
🇺🇸 United States

Repos / Data Links

Page Count
12 pages

Category
Computer Science: Computation and Language