Explainability in Context: A Multilevel Framework Aligning AI Explanations with Stakeholder Expectations Using LLMs
By: Marilyn Bello, Rafael Bello, Maria-Matilde García, and more
Potential Business Impact:
Makes AI understandable and trustworthy for everyone.
The growing application of artificial intelligence in sensitive domains has intensified the demand for systems that are not only accurate but also explainable and trustworthy. Although explainable AI (XAI) methods have proliferated, many do not consider the diverse audiences that interact with AI systems: from developers and domain experts to end-users and society. This paper addresses how trust in AI is influenced by the design and delivery of explanations and proposes a multilevel framework that aligns explanations with the epistemic, contextual, and ethical expectations of different stakeholders. The framework consists of three layers: algorithmic and domain-based, human-centered, and social explainability. We highlight the emerging role of Large Language Models (LLMs) in enhancing the social layer by generating accessible, natural language explanations. Through illustrative case studies, we demonstrate how this approach facilitates technical fidelity, user engagement, and societal accountability, reframing XAI as a dynamic, trust-building process.
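To make the layered idea concrete, below is a minimal sketch (not the authors' implementation) of how the social layer described in the abstract might wrap an algorithmic-layer output: feature attributions are formatted into a plain-language prompt that an LLM could turn into a stakeholder-friendly explanation. The function names, the toy loan example, and the attribution scores are all hypothetical, and the actual LLM call is left as a stub so the sketch stays self-contained.

```python
# Hypothetical sketch: turning an algorithmic-layer explanation (feature
# attributions) into a social-layer, plain-language explanation request.
# Names and values below are illustrative, not from the paper.

from typing import Dict


def attributions_to_prompt(prediction: str, attributions: Dict[str, float]) -> str:
    """Format raw feature attributions into an LLM prompt aimed at a lay audience."""
    ranked = sorted(attributions.items(), key=lambda kv: abs(kv[1]), reverse=True)
    lines = "\n".join(f"- {name}: {weight:+.2f}" for name, weight in ranked)
    return (
        "Explain the following model decision to a non-technical stakeholder "
        "in two or three sentences, avoiding jargon.\n"
        f"Decision: {prediction}\n"
        f"Feature contributions (positive values push toward the decision):\n{lines}"
    )


def explain_for_stakeholder(prediction: str, attributions: Dict[str, float]) -> str:
    """Social-layer step: hand the prompt to an LLM (stubbed out here)."""
    prompt = attributions_to_prompt(prediction, attributions)
    # In a real system this string would be sent to a chat-completion API;
    # returning the prompt keeps the sketch runnable without any dependency.
    return prompt


if __name__ == "__main__":
    # Toy loan-approval example with made-up attribution scores.
    print(explain_for_stakeholder(
        "loan denied",
        {"income": -0.42, "credit_history_length": -0.31, "existing_debt": 0.55},
    ))
```

The design point this illustrates is the paper's separation of concerns: the algorithmic layer stays faithful to the model's numbers, while the LLM-backed social layer is responsible only for rendering them in language a given stakeholder can act on.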
Similar Papers
Mind the XAI Gap: A Human-Centered LLM Framework for Democratizing Explainable AI
Machine Learning (CS)
Explains AI decisions for everyone, not just experts.
Increasing AI Explainability by LLM Driven Standard Processes
Artificial Intelligence
Makes AI decisions clear and trustworthy.
LLMs for Explainable AI: A Comprehensive Survey
Artificial Intelligence
Makes confusing AI easy for people to understand.