Patient Safety Risks from AI Scribes: Signals from End-User Feedback
By: Jessica Dai, Anwen Huang, Catherine Nasrallah, and more
Potential Business Impact:
AI scribes can introduce documentation errors that pose patient safety risks.
AI scribes are transforming clinical documentation at scale, yet their real-world performance remains understudied, particularly their impact on patient safety. To this end, we initiate a mixed-methods study of patient safety issues raised in feedback submitted by AI scribe users (healthcare providers) in a large U.S. hospital system. Both quantitative and qualitative analyses suggest that AI scribes may introduce patient safety risks through transcription errors, most notably errors involving medication and treatment; however, further study is needed to contextualize the absolute degree of risk.