The Framework That Survives Bad Models: Human-AI Collaboration For Clinical Trials
By: Yao Chen, David Ohlssen, Aimee Readie, and more
Potential Business Impact:
AI helps doctors check patient health from X-rays.
Artificial intelligence (AI) holds great promise for supporting clinical trials, from patient recruitment and endpoint assessment to treatment response prediction. However, deploying AI without safeguards poses significant risks, particularly when evaluating patient endpoints that directly impact trial conclusions. We compared two AI frameworks against human-only assessment for medical image-based disease evaluation, measuring cost, accuracy, robustness, and generalization ability. To stress-test these frameworks, we injected bad models, ranging from random guesses to naive predictions, to ensure that observed treatment effects remain valid even under severe model degradation. We evaluated the frameworks using two randomized controlled trials with endpoints derived from spinal X-ray images. Our findings indicate that using AI as a supporting reader (AI-SR) is the most suitable approach for clinical trials, as it meets all criteria across various model types, even with bad models. This method consistently provides reliable disease estimation, preserves clinical trial treatment effect estimates and conclusions, and retains these advantages when applied to different populations.
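The core idea of the stress test can be illustrated with a small simulation. The sketch below is a hypothetical toy model (not the paper's actual setup): a "bad model" that reads endpoints by random guessing is used either as the sole reader (AI-only) or as a supporting reader whose suggestion a human reviews before making the final call (AI-SR, with an assumed 95%-accurate human). The treatment-effect estimate collapses under AI-only but survives under AI-SR.

```python
import random

random.seed(0)

def simulate_arm(n, response_rate, reader):
    """Return the observed endpoint rate in one trial arm of size n."""
    positives = 0
    for _ in range(n):
        truth = random.random() < response_rate
        positives += reader(truth)
    return positives / n

def bad_model(truth):
    # A deliberately bad model: a random guess that ignores the image.
    return random.random() < 0.5

def ai_only(truth):
    # AI-only framework: the model's read is the final endpoint.
    return bad_model(truth)

def ai_sr(truth):
    # AI as supporting reader: the model suggests a read, but a human
    # (assumed 95% accurate here) makes the final determination.
    _suggestion = bad_model(truth)  # an attentive human can disregard it
    human_correct = random.random() < 0.95
    return truth if human_correct else not truth

n = 5000
true_effect = 0.60 - 0.30  # assumed response rates: treatment vs. control

for name, reader in [("AI-only", ai_only), ("AI-SR", ai_sr)]:
    est = simulate_arm(n, 0.60, reader) - simulate_arm(n, 0.30, reader)
    print(f"{name}: estimated effect = {est:.2f} (truth = {true_effect:.2f})")
```

Under random guessing, both arms read as roughly 50% responders, so the AI-only effect estimate shrinks toward zero; the AI-SR estimate stays near the true effect (attenuated only by the human's own misclassification), which is the robustness property the paper measures.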
Similar Papers
A Design Framework for operationalizing Trustworthy Artificial Intelligence in Healthcare: Requirements, Tradeoffs and Challenges for its Clinical Adoption
Artificial Intelligence
Makes AI doctors safe and fair for everyone.
Data reuse enables cost-efficient randomized trials of medical AI models
Machine Learning (CS)
Lets AI doctors test new ideas faster.