Artificial Intelligence Should Genuinely Support Clinical Reasoning and Decision Making To Bridge the Translational Gap
By: Kacper Sokol, James Fackler, Julia E. Vogt
Potential Business Impact:
Helps doctors make better clinical decisions with AI that supports their reasoning.
Artificial intelligence promises to revolutionise medicine, yet its impact remains limited by a pervasive translational gap. We posit that prevailing technology-centric approaches underpin this challenge, rendering the resulting systems fundamentally incompatible with clinical practice, in particular diagnostic reasoning and decision making. Instead, we propose a novel sociotechnical conceptualisation of data-driven support tools designed to complement doctors' cognitive and epistemic activities. Crucially, this approach prioritises real-world impact over superhuman performance on inconsequential benchmarks.
Similar Papers
Limits of trust in medical AI
Machine Learning (CS)
AI can help doctors, but patients might not trust it.
A Design Framework for operationalizing Trustworthy Artificial Intelligence in Healthcare: Requirements, Tradeoffs and Challenges for its Clinical Adoption
Artificial Intelligence
Shows how to build healthcare AI that is trustworthy, safe, and fair for everyone.
Data over dialogue: Why artificial intelligence is unlikely to humanise medicine
Computers and Society
Warns that AI could make medical care less humane and erode patients' trust.