Trust in Transparency: How Explainable AI Shapes User Perceptions
By: Allen Daniel Sunny
Potential Business Impact:
Helps AI systems explain loan decisions fairly.
This study explores the integration of contextual explanations into AI-powered loan decision systems to enhance trust and usability. While traditional AI systems rely heavily on algorithmic transparency and technical accuracy, they often fail to account for broader social and economic contexts. Through a qualitative study, I investigated user interactions with AI explanations and identified key gaps, including the inability of current systems to provide context. My findings underscore the limitations of purely technical transparency and the critical need for contextual explanations that bridge the gap between algorithmic outputs and real-world decision-making. By aligning explanations with user needs and broader societal factors, the system aims to foster trust, improve decision-making, and advance the design of human-centered AI systems.
Similar Papers
Preliminary Quantitative Study on Explainability and Trust in AI Systems
Artificial Intelligence
Makes AI loan decisions easier to trust.
Is Trust Correlated With Explainability in AI? A Meta-Analysis
Artificial Intelligence
Makes AI more trustworthy by explaining its decisions.
Too Much to Trust? Measuring the Security and Cognitive Impacts of Explainability in AI-Driven SOCs
Cryptography and Security
Helps security experts trust computer threat warnings.