Evaluating Explainability: A Framework for Systematic Assessment and Reporting of Explainable AI Features
By: Miguel A. Lago, Ghada Zamzmi, Brandon Eich, and more
Potential Business Impact:
Checks if AI's "thinking" makes sense.
Explainability features are intended to provide insight into the internal mechanisms of an AI device, but evaluation techniques for assessing the quality of the explanations they provide are lacking. We propose a framework to assess and report explainable AI features. Our evaluation framework for AI explainability is based on four criteria: 1) Consistency quantifies the variability of explanations for similar inputs, 2) Plausibility estimates how close the explanation is to the ground truth, 3) Fidelity assesses the alignment between the explanation and the model's internal mechanisms, and 4) Usefulness evaluates the explanation's impact on task performance. Finally, we developed a scorecard for AI explainability methods that serves as a complete description and evaluation to accompany this type of algorithm. We describe these four criteria and give examples of how they can be evaluated. As a case study, we use Ablation CAM and Eigen CAM to illustrate the evaluation of explanation heatmaps for the detection of breast lesions on synthetic mammograms. The first three criteria are evaluated for clinically relevant scenarios. Our proposed framework establishes criteria through which the quality of explanations provided by AI models can be evaluated. We intend for our framework to spark a dialogue regarding the value provided by explainability features and to help improve the development and evaluation of AI-based medical devices.
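To make the criteria concrete, here is a minimal Python sketch of how two of them might be scored for a saliency heatmap. It assumes heatmaps normalized to [0, 1] produced by any CAM method (such as Ablation CAM or Eigen CAM) and a binary ground-truth lesion mask; the specific scoring choices (pixel-wise standard deviation for consistency, intersection-over-union for plausibility) and all function names are illustrative assumptions, not the paper's exact metrics.

```python
# Illustrative sketch: scoring two of the four criteria for explanation heatmaps.
# Heatmaps are assumed to be 2D arrays in [0, 1]; the lesion mask is binary.
import numpy as np

def consistency(heatmaps: list[np.ndarray]) -> float:
    """Consistency: 1 minus the mean pixel-wise standard deviation across
    heatmaps generated for perturbed versions of the same input."""
    stack = np.stack(heatmaps)                  # shape: (n_perturbations, H, W)
    return float(1.0 - stack.std(axis=0).mean())

def plausibility(heatmap: np.ndarray, lesion_mask: np.ndarray,
                 threshold: float = 0.5) -> float:
    """Plausibility: intersection-over-union between the thresholded heatmap
    and the ground-truth lesion mask."""
    salient = heatmap >= threshold
    truth = lesion_mask.astype(bool)
    union = np.logical_or(salient, truth).sum()
    if union == 0:
        return 0.0
    return float(np.logical_and(salient, truth).sum() / union)

# Toy usage with random data standing in for real CAM outputs.
rng = np.random.default_rng(0)
maps = [np.clip(rng.random((64, 64)) + 0.01 * i, 0, 1) for i in range(5)]
mask = np.zeros((64, 64), dtype=int)
mask[20:30, 20:30] = 1                          # synthetic lesion location
print(f"consistency:  {consistency(maps):.3f}")
print(f"plausibility: {plausibility(maps[0], mask):.3f}")
```

In this sketch, a consistency score near 1 means the explanation barely changes under small input perturbations, and a higher plausibility score means the salient region overlaps more with the known lesion; fidelity and usefulness would require access to the model's internals and a reader study, respectively, so they are not shown here.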
Similar Papers
Assessing AI Explainability: A Usability Study Using a Novel Framework Involving Clinicians
Human-Computer Interaction
Helps doctors understand AI for better patient care.
Towards an Evaluation Framework for Explainable Artificial Intelligence Systems for Health and Well-being
Artificial Intelligence
Helps doctors trust computer health advice.
Unifying VXAI: A Systematic Review and Framework for the Evaluation of Explainable AI
Machine Learning (CS)
Helps AI explain its decisions clearly.