On the Hardness of Computing Counterfactual and Semifactual Explanations in XAI
By: André Artelt, Martin Olsen, Kevin Tierney
Providing clear explanations of the decisions made by machine learning models is essential for deploying these models in critical applications. Counterfactual and semifactual explanations have emerged as two mechanisms for giving users insight into model outputs. We provide an overview of the computational complexity results in the literature for generating these explanations, finding that in many cases, generating explanations is computationally hard. We strengthen this argument considerably by contributing our own inapproximability results, showing that explanations are not only often hard to generate but, under certain assumptions, also hard to approximate. We discuss the implications of these complexity results for the XAI community and for policymakers seeking to regulate explanations in AI.