Preliminary Quantitative Study on Explainability and Trust in AI Systems
By: Allen Daniel Sunny
Potential Business Impact:
Makes AI loan decisions easier to trust.
Large-scale AI models such as GPT-4 have accelerated the deployment of artificial intelligence across critical domains including law, healthcare, and finance, raising urgent questions about trust and transparency. This study investigates the relationship between explainability and user trust in AI systems through a quantitative experimental design. Using an interactive, web-based loan approval simulation, we compare how different types of explanations, ranging from basic feature importance to interactive counterfactuals, influence perceived trust. Results suggest that interactivity enhances both user engagement and confidence, and that the clarity and relevance of explanations are key determinants of trust. These findings contribute empirical evidence to the growing field of human-centered explainable AI, highlighting measurable effects of explainability design on user perception.
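To make the contrast between the two explanation styles concrete, the sketch below illustrates, in a minimal and hypothetical form, what a feature-importance explanation and a simple counterfactual explanation might look like for a loan-approval model. It is not the study's implementation; the feature names, synthetic data, and counterfactual search are illustrative assumptions only.

```python
# Minimal sketch (not the study's implementation) contrasting two explanation styles:
# static feature importance vs. a simple counterfactual ("what would need to change?").
# Feature names, data, and thresholds are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic applicants: [income (k$), credit_score, debt_ratio]
X = rng.normal([60, 650, 0.35], [20, 80, 0.1], size=(500, 3))
y = (0.03 * X[:, 0] + 0.01 * X[:, 1] - 4 * X[:, 2]
     + rng.normal(0, 1, 500) > 6).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)
feature_names = ["income", "credit_score", "debt_ratio"]

# 1) Basic feature-importance explanation: signed model coefficients.
for name, coef in zip(feature_names, model.coef_[0]):
    print(f"{name}: weight {coef:+.3f}")

# 2) Simple counterfactual: nudge one feature until the decision flips.
applicant = np.array([[45.0, 600.0, 0.45]])
print("original decision:", "approved" if model.predict(applicant)[0] else "denied")

cf = applicant.copy()
step = np.array([1.0, 0.0, 0.0])  # try raising income in $1k increments
while model.predict(cf)[0] == 0 and cf[0, 0] < 200:
    cf += step

if model.predict(cf)[0] == 1:
    print(f"counterfactual: approval predicted if income were ~{cf[0, 0]:.0f}k")
else:
    print("no counterfactual found by raising income alone")
```

In the study's terms, the first output corresponds to a basic, static explanation, while the second supports the kind of interactive "what if" exploration that the counterfactual condition provides to participants.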
Similar Papers
Trust in Transparency: How Explainable AI Shapes User Perceptions
Human-Computer Interaction
Helps AI explain loan choices fairly.
Is Trust Correlated With Explainability in AI? A Meta-Analysis
Artificial Intelligence
Makes AI more trustworthy by explaining its decisions.
Can AI Explanations Make You Change Your Mind?
Human-Computer Interaction
Helps people trust AI by showing how it thinks.