RealAC: A Domain-Agnostic Framework for Realistic and Actionable Counterfactual Explanations
By: Asiful Arefeen, Shovito Barua Soumma, Hassan Ghasemzadeh
Potential Business Impact:
Helps AI explain its choices realistically.
Counterfactual explanations provide human-understandable reasoning for AI-made decisions by describing the minimal changes to input features that would alter a model's prediction. To be truly useful in practice, such explanations must be realistic and feasible -- they should respect both the underlying data distribution and user-defined feasibility constraints. Existing approaches often enforce inter-feature dependencies through rigid, hand-crafted constraints or domain-specific knowledge, which limits their generalizability and their ability to capture the complex, nonlinear relations inherent in data. Moreover, they rarely accommodate user-specified preferences and often suggest explanations that are causally implausible or infeasible to act upon. We introduce RealAC, a domain-agnostic framework for generating realistic and actionable counterfactuals. RealAC automatically preserves complex inter-feature dependencies without relying on explicit domain knowledge, by aligning the joint distributions of feature pairs between factual and counterfactual instances. The framework also allows end-users to "freeze" attributes they cannot or do not wish to change, by suppressing updates to frozen features during optimization. Evaluations on three synthetic and two real datasets demonstrate that RealAC balances realism with actionability. Our method outperforms state-of-the-art baselines and Large Language Model-based counterfactual generation techniques on causal edge score, dependency preservation score, and the IM1 realism metric, offering a solution for causality-aware and user-centric counterfactual generation.
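To make the two mechanisms in the abstract concrete, here is a minimal, hypothetical sketch of counterfactual search by gradient descent. It is not the authors' implementation: the linear scorer, the function name `generate_counterfactual`, and the use of the feature-pair product `x[i]*x[j]` as a crude stand-in for aligning pairwise joint statistics are all illustrative assumptions. What it does show is the two ideas the abstract names: a penalty tying counterfactual feature pairs back to their factual dependency, and zeroing the gradient of user-frozen attributes so they never change.

```python
import numpy as np

def generate_counterfactual(x, w, b, target, frozen, pairs,
                            lam=0.1, lr=0.01, steps=2000):
    """Gradient-descent counterfactual search for a toy linear scorer w.x + b.

    frozen: boolean mask of attributes the user has "frozen" (must not change).
    pairs:  (i, j) feature index pairs whose factual dependency -- proxied here
            by the product x[i]*x[j] -- should be preserved in the counterfactual.
    lam:    weight of the pairwise-dependency penalty.
    """
    x_cf = x.astype(float).copy()
    for _ in range(steps):
        score = w @ x_cf + b
        grad = 2.0 * (score - target) * w       # pull the prediction toward target
        for i, j in pairs:                      # pairwise-dependency penalty gradient
            diff = 2.0 * (x_cf[i] * x_cf[j] - x[i] * x[j])
            grad[i] += lam * diff * x_cf[j]
            grad[j] += lam * diff * x_cf[i]
        grad[frozen] = 0.0                      # suppress change in frozen features
        x_cf -= lr * grad
    return x_cf
```

In this sketch the frozen attributes are exactly unchanged on return, while the remaining features move the score toward the target under the dependency penalty; RealAC's actual loss and distribution-alignment term are described in the paper itself.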
Similar Papers
Actionable and diverse counterfactual explanations incorporating domain knowledge and causal constraints
Artificial Intelligence
Makes AI suggestions practical and believable.
Actionable Counterfactual Explanations Using Bayesian Networks and Path Planning with Applications to Environmental Quality Improvement
Artificial Intelligence
Helps computers explain decisions fairly and privately.
From Facts to Foils: Designing and Evaluating Counterfactual Explanations for Smart Environments
Artificial Intelligence
Helps smart homes explain why things happened.