On the Definition and Detection of Cherry-Picking in Counterfactual Explanations
By: James Hinns, Sofie Goethals, Stephan Van der Veeken, and more
Potential Business Impact:
Finds when AI explanations are unfairly chosen.
Counterfactual explanations are widely used to communicate how inputs must change for a model to alter its prediction. For a single instance, many valid counterfactuals can exist, which leaves an explanation provider free to cherry-pick explanations that suit a narrative of their choice, highlighting favourable behaviour and withholding examples that reveal problematic behaviour. We formally define cherry-picking for counterfactual explanations in terms of an admissible explanation space, specified by the generation procedure, and a utility function. We then study to what extent an external auditor can detect such manipulation, considering three levels of access to the explanation process: full procedural access, partial procedural access, and explanation-only access. We show that detection is extremely limited in practice. Even with full procedural access, cherry-picked explanations can remain difficult to distinguish from non-cherry-picked ones, because the multiplicity of valid counterfactuals and the flexibility of the explanation specification provide enough degrees of freedom to mask deliberate selection. Empirically, we demonstrate that this variability often exceeds the effect of cherry-picking on standard counterfactual quality metrics such as proximity, plausibility, and sparsity, making cherry-picked explanations statistically indistinguishable from baseline explanations. We argue that safeguards should therefore prioritise reproducibility, standardisation, and procedural constraints over post-hoc detection, and we provide recommendations for algorithm developers, explanation providers, and auditors.
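To make the definition concrete, here is a minimal Python sketch of cherry-picking as selection over an admissible explanation space: a provider generates many valid counterfactuals and returns the one maximising a utility function of their own choosing. All names here (admissible_counterfactuals, cherry_pick, the toy model, and the utility) are illustrative assumptions, not the paper's implementation; proximity and sparsity are the standard quality metrics the abstract mentions.

```python
import numpy as np

def admissible_counterfactuals(x, model, n_candidates=50, rng=None):
    """Sample candidate counterfactuals by perturbing x until the
    prediction flips; a stand-in for any generation procedure."""
    rng = rng if rng is not None else np.random.default_rng(0)
    original = model(x)
    candidates = []
    while len(candidates) < n_candidates:
        cf = x + rng.normal(scale=0.5, size=x.shape)
        if model(cf) != original:
            candidates.append(cf)
    return candidates

def proximity(x, cf):
    """L2 distance to the original instance (lower = closer)."""
    return float(np.linalg.norm(cf - x))

def sparsity(x, cf, tol=1e-3):
    """Number of features changed (lower = sparser)."""
    return int(np.sum(np.abs(cf - x) > tol))

def cherry_pick(x, candidates, utility):
    """The provider returns the admissible counterfactual that
    maximises their own (possibly narrative-driven) utility."""
    return max(candidates, key=lambda cf: utility(x, cf))

# Toy model: predict 1 when the feature sum is positive.
model = lambda v: int(v.sum() > 0)
x = np.array([-0.2, -0.3, -0.1])

candidates = admissible_counterfactuals(x, model)
baseline = candidates[0]  # what an unmanipulated run might report

# Narrative: "only a tiny change was needed" -> pick the closest one.
picked = cherry_pick(x, candidates, utility=lambda x_, cf: -proximity(x_, cf))

print(f"baseline: proximity={proximity(x, baseline):.3f}, sparsity={sparsity(x, baseline)}")
print(f"picked:   proximity={proximity(x, picked):.3f}, sparsity={sparsity(x, picked)}")
```

Both explanations in this sketch are valid by construction, and nothing in the returned counterfactual itself records that a utility-driven selection took place; that asymmetry is the core of the detection problem the paper studies.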
Similar Papers
Ranking Counterfactual Explanations
Artificial Intelligence
Shows why a computer made a choice.
Interpretable Model-Aware Counterfactual Explanations for Random Forest
Machine Learning (Stat)
Explains why computer decisions change outcomes.
Graph Diffusion Counterfactual Explanation
Machine Learning (CS)
Helps AI explain why it makes graph decisions.