Benchmark Success, Clinical Failure: When Reinforcement Learning Optimizes for Benchmarks, Not Patients
By: Armin Berger, Manuela Bergau, Helen Schneider, and more
Potential Business Impact:
Helps AI understand X-rays better, but needs careful training.
Recent Reinforcement Learning (RL) advances for Large Language Models (LLMs) have improved reasoning tasks, yet their resource-constrained application to medical imaging remains underexplored. We introduce ChexReason, a vision-language model trained via R1-style methodology (SFT followed by GRPO) using only 2,000 SFT samples, 1,000 RL samples, and a single A100 GPU. Evaluations on CheXpert and NIH benchmarks reveal a fundamental tension: GRPO recovers in-distribution performance (23% improvement on CheXpert, macro-F1 = 0.346) but degrades cross-dataset transferability (19% drop on NIH). This mirrors high-resource models like NV-Reason-CXR-3B, suggesting the issue stems from the RL paradigm rather than scale. We identify a generalization paradox where the SFT checkpoint uniquely improves on NIH before optimization, indicating teacher-guided reasoning captures more institution-agnostic features. Furthermore, cross-model comparisons show structured reasoning scaffolds benefit general-purpose VLMs but offer minimal gain for medically pre-trained models. Consequently, curated supervised fine-tuning may outperform aggressive RL for clinical deployment requiring robustness across diverse populations.
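The headline result above is reported as macro-F1 (0.346 on CheXpert), which averages per-class F1 so that rare findings count as much as common ones. A minimal sketch of that metric, using hypothetical labels rather than actual CheXpert data:

```python
def macro_f1(y_true, y_pred, labels):
    """Average the per-class F1 scores, weighting every class equally."""
    f1s = []
    for c in labels:
        tp = sum(1 for t, p in zip(y_true, y_pred) if t == c and p == c)
        fp = sum(1 for t, p in zip(y_true, y_pred) if t != c and p == c)
        fn = sum(1 for t, p in zip(y_true, y_pred) if t == c and p != c)
        precision = tp / (tp + fp) if tp + fp else 0.0
        recall = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * precision * recall / (precision + recall)
                   if precision + recall else 0.0)
    return sum(f1s) / len(f1s)

# Hypothetical 3-class example (illustrative only, not benchmark data)
y_true = ["atelectasis", "effusion", "effusion", "normal", "normal"]
y_pred = ["atelectasis", "effusion", "normal", "normal", "effusion"]
print(round(macro_f1(y_true, y_pred, ["atelectasis", "effusion", "normal"]), 3))
# → 0.667
```

Because each class contributes equally, a model that only gets frequent findings right scores poorly, which is why macro-F1 is a common choice for imbalanced chest X-ray label sets.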
Similar Papers
Enhancing Radiology Report Generation and Visual Grounding using Reinforcement Learning
Artificial Intelligence
Helps doctors read X-rays better and faster.
Med-R1: Reinforcement Learning for Generalizable Medical Reasoning in Vision-Language Models
CV and Pattern Recognition
Helps doctors understand X-rays better and faster.
Reinforcement Learning for Reasoning in Small LLMs: What Works and What Doesn't
Machine Learning (CS)
Makes small AI smarter with less money.