Usability Testing of an Explainable AI-enhanced Tool for Clinical Decision Support: Insights from the Reflexive Thematic Analysis
By: Mohammad Golam Kibria, Lauren Kucirka, Javed Mostafa
Potential Business Impact:
Helps doctors trust AI for better patient care.
Artificial intelligence-augmented technology represents a considerable opportunity for improving healthcare delivery. Significant progress has been made in demonstrating the value of complex models for enhancing clinicians' efficiency in decision-making. However, clinical adoption of such models remains limited due to multifaceted implementation issues, among them the explainability of AI models. A well-documented concern is that unclear AI explainability negatively influences clinicians' willingness to accept complex models. Through a usability study engaging 20 U.S.-based clinicians, analyzed using qualitative reflexive thematic analysis, this study develops and presents a concrete framework and an operational definition of explainability. The framework can inform the customizations and feature developments required in AI tools to support clinicians' preferences and enhance their acceptance.
Similar Papers
Assessing AI Explainability: A Usability Study Using a Novel Framework Involving Clinicians
Human-Computer Interaction
Helps doctors understand AI for better patient care.
A Systematic Review of User-Centred Evaluation of Explainable AI in Healthcare
Human-Computer Interaction
Helps doctors trust AI by testing how it explains things.
A User Study Evaluating Argumentative Explanations in Diagnostic Decision Support
Artificial Intelligence
Helps doctors trust AI for better patient care.