Assessing AI Explainability: A Usability Study Using a Novel Framework Involving Clinicians
By: Mohammad Golam Kibria, Lauren Kucirka, Javed Mostafa
Potential Business Impact:
Helps doctors understand AI for better patient care.
An AI design framework was developed based on three core principles: understandability, trust, and usability. The framework was conceptualized by synthesizing evidence from the literature and by consulting with experts. The initial version of the AI Explainability Framework was validated through an in-depth expert engagement and review process. For evaluation purposes, an AI-anchored prototype incorporating novel explainability features was built and deployed online. The primary function of the prototype was to predict postpartum depression risk using analytic models. The prototype was developed iteratively, starting with a pilot-level formative evaluation, followed by refinements and a summative evaluation. The System Explainability Scale (SES) metric was developed to measure the influence of the three dimensions of the AI Explainability Framework. For the summative stage, a comprehensive usability test was conducted with 20 clinicians, and the SES metric was used to assess clinicians' satisfaction with the tool. On a 5-point rating scale, the tool received the highest score for the usability dimension, followed by trust and understandability. The average explainability score was 4.56, and the average scores for understandability, trust, and usability were 4.51, 4.53, and 4.71, respectively. Overall, the 13-item SES metric showed strong internal consistency (Cronbach's alpha = 0.84) and a positive correlation (Spearman's rho = 0.81, p < 0.001) between the composite SES score and explainability. A major finding was that the framework, combined with the SES usability metric, provides a straightforward approach for developing AI-based healthcare tools that lower the challenges associated with explainability.
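The paper does not provide analysis code, but the reported reliability and correlation statistics follow standard formulas. Below is a minimal sketch of how a 13-item composite SES score, Cronbach's alpha, and Spearman's rho could be computed; the response matrix, the single-item explainability rating, and all variable names are hypothetical illustrations, not data from the study.

```python
import numpy as np
from scipy.stats import spearmanr


def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    k = items.shape[1]
    item_variances = items.var(axis=0, ddof=1)
    total_variance = items.sum(axis=1).var(ddof=1)
    return (k / (k - 1)) * (1 - item_variances.sum() / total_variance)


# Hypothetical data: 20 clinicians rating 13 SES items on a 5-point scale.
rng = np.random.default_rng(0)
responses = rng.integers(3, 6, size=(20, 13)).astype(float)

# Composite SES score per clinician: mean of the 13 item ratings.
composite_ses = responses.mean(axis=1)

# Hypothetical overall explainability rating per clinician (single item).
explainability = rng.integers(3, 6, size=20).astype(float)

alpha = cronbach_alpha(responses)
rho, p_value = spearmanr(composite_ses, explainability)

print(f"Cronbach's alpha: {alpha:.2f}")
print(f"Spearman's rho: {rho:.2f} (p = {p_value:.3f})")
```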
Similar Papers
Evaluating Explainability: A Framework for Systematic Assessment and Reporting of Explainable AI Features
Artificial Intelligence
Checks if AI's "thinking" makes sense.
Usability Testing of an Explainable AI-enhanced Tool for Clinical Decision Support: Insights from the Reflexive Thematic Analysis
Human-Computer Interaction
Helps doctors trust AI for better patient care.
A Systematic Review of User-Centred Evaluation of Explainable AI in Healthcare
Human-Computer Interaction
Helps doctors trust AI by testing how it explains things.