Rashomon in the Streets: Explanation Ambiguity in Scene Understanding
By: Helge Spieker, Jørn Eirik Betten, Arnaud Gotlieb, and more
Potential Business Impact:
Shows why explanations of self-driving cars' decisions can disagree, and why that matters for trusting them.
Explainable AI (XAI) is essential for validating and trusting models in safety-critical applications like autonomous driving. However, the reliability of XAI is challenged by the Rashomon effect, where multiple, equally accurate models can offer divergent explanations for the same prediction. This paper provides the first empirical quantification of this effect for the task of action prediction in real-world driving scenes. Using Qualitative Explainable Graphs (QXGs) as a symbolic scene representation, we train Rashomon sets of two distinct model classes: interpretable, pair-based gradient boosting models and complex, graph-based Graph Neural Networks (GNNs). Using feature attribution methods, we measure the agreement of explanations both within and between these classes. Our results reveal significant explanation disagreement, suggesting that explanation ambiguity is an inherent property of the problem rather than just a modeling artifact.
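As a rough illustration of what measuring explanation agreement across a Rashomon set can look like (this is a minimal sketch, not the paper's protocol), the snippet below compares feature attribution vectors from several equally accurate models using Spearman rank correlation and top-k feature overlap. The model names and attribution values are hypothetical placeholders.

```python
# Minimal sketch: quantify agreement between feature attributions produced
# by different, equally accurate models (a "Rashomon set") for one prediction.
# All attribution values and model names below are illustrative, not from the paper.

import itertools
import numpy as np
from scipy.stats import spearmanr


def topk_overlap(a: np.ndarray, b: np.ndarray, k: int = 3) -> float:
    """Fraction of shared features among the k most important ones (by |attribution|)."""
    top_a = set(np.argsort(-np.abs(a))[:k])
    top_b = set(np.argsort(-np.abs(b))[:k])
    return len(top_a & top_b) / k


# Hypothetical attributions over the same set of pairwise scene features
# (e.g., qualitative relations between object pairs in a QXG).
attributions = {
    "gbm_1": np.array([0.42, 0.10, 0.31, 0.05, 0.12]),
    "gbm_2": np.array([0.08, 0.45, 0.30, 0.07, 0.10]),
    "gnn_1": np.array([0.15, 0.12, 0.40, 0.28, 0.05]),
}

# Pairwise agreement: high rank correlation and high top-k overlap would mean
# the models "agree" on why the action was predicted; low values indicate
# the kind of explanation disagreement the paper quantifies.
for (name_a, attr_a), (name_b, attr_b) in itertools.combinations(attributions.items(), 2):
    rho, _ = spearmanr(attr_a, attr_b)
    print(f"{name_a} vs {name_b}: spearman={rho:.2f}, top-3 overlap={topk_overlap(attr_a, attr_b):.2f}")
```

In this toy setup, comparing gradient-boosting models with each other probes within-class agreement, while comparing them against the GNN probes between-class agreement, mirroring the two comparisons described in the abstract.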
Similar Papers
Navigating the Rashomon Effect: How Personalization Can Help Adjust Interpretable Machine Learning Models to Individual Users
Machine Learning (CS)
Lets computer models show you what they learned.
"A 6 or a 9?": Ensemble Learning Through the Multiplicity of Performant Models and Explanations
Machine Learning (CS)
Finds best computer answers from many good ones.
Explainable Scene Understanding with Qualitative Representations and Graph Neural Networks
Robotics
Helps self-driving cars understand why others move.