Preliminary Quantitative Study on Explainability and Trust in AI Systems

Published: October 17, 2025 | arXiv ID: 2510.15769v1

By: Allen Daniel Sunny

Potential Business Impact:

Makes AI loan decisions easier to trust.

Business Areas:
Artificial Intelligence, Data and Analytics, Science and Engineering, Software

Large-scale AI models such as GPT-4 have accelerated the deployment of artificial intelligence across critical domains including law, healthcare, and finance, raising urgent questions about trust and transparency. This study investigates the relationship between explainability and user trust in AI systems through a quantitative experimental design. Using an interactive, web-based loan approval simulation, we compare how different types of explanations, ranging from basic feature importance to interactive counterfactuals, influence perceived trust. Results suggest that interactivity enhances both user engagement and confidence, and that the clarity and relevance of explanations are key determinants of trust. These findings contribute empirical evidence to the growing field of human-centered explainable AI, highlighting measurable effects of explainability design on user perception.
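To make the contrast between the two explanation styles concrete, here is a minimal sketch, not the paper's actual simulation: it trains a toy loan-approval classifier, then produces a feature-importance explanation and a simple single-feature counterfactual for a denied applicant. The feature names, synthetic data, and grid-search ranges are all illustrative assumptions.

```python
# Toy illustration of feature-importance vs. counterfactual explanations
# for a loan-approval decision. All names and data here are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
features = ["income", "credit_score", "debt_ratio"]

# Synthetic applicants: approval loosely driven by income and credit
# score, and penalized by debt ratio.
X = rng.normal(size=(500, 3))
y = (X[:, 0] + X[:, 1] - X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

applicant = np.array([[-0.4, 0.2, 0.9]])  # an applicant the model denies
print("decision:", "approved" if model.predict(applicant)[0] == 1 else "denied")

# Explanation style 1: feature importance -- the signed contribution of
# each input to the model's score, largest magnitude first.
contributions = model.coef_[0] * applicant[0]
for name, c in sorted(zip(features, contributions), key=lambda t: abs(t[1]), reverse=True):
    print(f"  {name}: {c:+.2f}")

# Explanation style 2: counterfactual -- the smallest single-feature
# change (coarse grid search) that flips the decision to "approved".
best = None
for i, name in enumerate(features):
    for delta in sorted(np.linspace(-3, 3, 121), key=abs):
        probe = applicant.copy()
        probe[0, i] += delta
        if model.predict(probe)[0] == 1:
            if best is None or abs(delta) < abs(best[1]):
                best = (name, delta)
            break
if best is not None:
    print(f"counterfactual: change {best[0]} by {best[1]:+.2f} -> approved")
```

The paper's interactive condition would let users pose such "what would it take?" queries themselves; the study's finding is that this interactivity, along with clear and relevant explanations, is what moves perceived trust.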

Country of Origin
🇺🇸 United States

Page Count
9 pages

Category
Computer Science:
Artificial Intelligence