Bridging Human Cognition and AI: A Framework for Explainable Decision-Making Systems
By: N. Jean, G. Le Pera
Potential Business Impact:
Makes AI easier to understand and trust.
Explainability in AI and ML models is critical for fostering trust, ensuring accountability, and enabling informed decision-making in high-stakes domains. Yet this objective is often unmet in practice. This paper proposes a general-purpose framework that bridges state-of-the-art explainability techniques with Malle's five-category model of behavior explanation: Knowledge Structures, Simulation/Projection, Covariation, Direct Recall, and Rationalization. The framework is designed to be applicable across AI-assisted decision-making systems, with the goal of enhancing transparency, interpretability, and user trust. We demonstrate its practical relevance through real-world case studies, including credit risk assessment and regulatory analysis powered by large language models (LLMs). By aligning technical explanations with human cognitive mechanisms, the framework lays the groundwork for more comprehensible, responsible, and ethical AI systems.
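One way to read the framework's core idea is as a mapping from the outputs of XAI techniques onto Malle's five explanation categories, so a decision-support UI can present each explanation via the cognitive mechanism a reader would naturally use. The sketch below is ours, not the authors': the technique names and their pairings with categories are illustrative assumptions, only the five category names come from the abstract.

```python
from dataclasses import dataclass
from enum import Enum, auto


class MalleCategory(Enum):
    """Malle's five categories of behavior explanation (named in the abstract)."""
    KNOWLEDGE_STRUCTURES = auto()
    SIMULATION_PROJECTION = auto()
    COVARIATION = auto()
    DIRECT_RECALL = auto()
    RATIONALIZATION = auto()


@dataclass
class Explanation:
    technique: str           # XAI technique that produced the artifact
    category: MalleCategory  # cognitive mechanism it aligns with
    text: str                # human-readable explanation shown to the user


# Hypothetical pairings of common XAI techniques with Malle categories;
# these are our assumptions for illustration, not taken from the paper.
TECHNIQUE_TO_CATEGORY = {
    "shap_feature_attribution": MalleCategory.COVARIATION,
    "counterfactual_example": MalleCategory.SIMULATION_PROJECTION,
    "rule_extraction": MalleCategory.KNOWLEDGE_STRUCTURES,
    "nearest_training_case": MalleCategory.DIRECT_RECALL,
    "llm_generated_rationale": MalleCategory.RATIONALIZATION,
}


def tag_explanation(technique: str, text: str) -> Explanation:
    """Attach the cognitive category so explanations can be grouped
    by the mechanism a human reader would use to interpret them."""
    return Explanation(technique, TECHNIQUE_TO_CATEGORY[technique], text)


if __name__ == "__main__":
    # Toy example in the spirit of the paper's credit risk case study.
    e = tag_explanation(
        "counterfactual_example",
        "Had the applicant's debt-to-income ratio been below 0.35, "
        "the credit-risk model would have approved the loan.",
    )
    print(e.category.name, "->", e.text)
```

Keeping the mapping as explicit data, rather than hard-coding it into each explainer, makes it easy to audit which cognitive category a given technique is assumed to serve and to revise the pairings as the framework evolves.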
Similar Papers
A Framework for Causal Concept-based Model Explanations
Artificial Intelligence
Explains how AI makes decisions using simple ideas.
Mind the XAI Gap: A Human-Centered LLM Framework for Democratizing Explainable AI
Machine Learning (CS)
Explains AI decisions for everyone, not just experts.
A Conceptual Framework for AI-based Decision Systems in Critical Infrastructures
Computers and Society
Helps AI and people work safely together.