Augmenting Intelligence: A Hybrid Framework for Scalable and Stable Explanations
By: Lawrence Krukrubo, Julius Odede, Olawande Olusegun
Current approaches to Explainable AI (XAI) face a "Scalability-Stability Dilemma": post-hoc methods (e.g., LIME, SHAP) scale easily but suffer from instability, while supervised explanation frameworks (e.g., TED) offer stability but require prohibitive human effort to label every training instance. This paper proposes a Hybrid LRR-TED framework that addresses this dilemma through a novel "Asymmetry of Discovery." Applying the framework to customer churn prediction, we demonstrate that automated rule learners (GLRM) excel at identifying broad "Safety Nets" (retention patterns) but struggle to capture specific "Risk Traps" (churn triggers), a phenomenon we term the Anna Karenina Principle of Churn. By initialising the explanation matrix with automated safety rules and augmenting it with a Pareto-optimal set of just four human-defined risk rules, our approach achieves 94.00% predictive accuracy. This configuration outperforms the full 8-rule manual expert baseline while reducing human annotation effort by 50%, suggesting a paradigm shift for Human-in-the-Loop AI: moving experts from the role of "Rule Writers" to that of "Exception Handlers."
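To make the hybrid setup concrete, here is a minimal sketch of the rule-matrix idea, not the paper's actual GLRM/TED pipeline: automated "safety net" rules (hard-coded stand-ins for what a rule learner would discover) are combined with a small set of human-defined "risk trap" rules into one binary explanation matrix that drives the churn prediction. All column names, thresholds, and the final decision logic are illustrative assumptions.

```python
import numpy as np
import pandas as pd

def build_rule_matrix(df: pd.DataFrame) -> pd.DataFrame:
    """Evaluate every rule on every customer, yielding a binary explanation matrix."""
    # Automated "safety net" rules: broad retention patterns a rule learner tends to find.
    auto_rules = {
        "long_tenure": df["tenure_months"] >= 24,
        "two_year_contract": df["contract"] == "two_year",
    }
    # Human-defined "risk trap" rules: specific churn triggers the learner missed.
    human_rules = {
        "new_on_month_to_month": (df["tenure_months"] < 6) & (df["contract"] == "month_to_month"),
        "recent_price_increase": df["price_increase_pct"] > 0.15,
        "repeated_support_calls": df["support_calls_90d"] >= 3,
        "payment_failure": df["failed_payments_90d"] >= 1,
    }
    return pd.DataFrame({**auto_rules, **human_rules}).astype(int)

def predict_churn(rule_matrix: pd.DataFrame) -> np.ndarray:
    """Flag churn when a risk trap fires and no safety net covers the customer."""
    safety = rule_matrix[["long_tenure", "two_year_contract"]].any(axis=1)
    risk = rule_matrix.drop(columns=["long_tenure", "two_year_contract"]).any(axis=1)
    return (risk & ~safety).astype(int).to_numpy()

# Toy usage: a covered long-tenure customer versus a new month-to-month customer.
customers = pd.DataFrame({
    "tenure_months": [30, 2],
    "contract": ["two_year", "month_to_month"],
    "price_increase_pct": [0.0, 0.2],
    "support_calls_90d": [0, 4],
    "failed_payments_90d": [0, 0],
})
print(predict_churn(build_rule_matrix(customers)))  # -> [0 1]
```

The point of the sketch is the division of labour: the automated rules provide coverage, while the four human rules handle the exceptions, mirroring the shift from "Rule Writers" to "Exception Handlers."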