OctoMed: Data Recipes for State-of-the-Art Multimodal Medical Reasoning
By: Timothy Ossowski, Sheng Zhang, Qianchu Liu, and more
Potential Business Impact:
Teaches AI to understand medical images and text.
High-quality, carefully curated data is a cornerstone of training medical large language models, as it directly impacts both generalization and robustness to unseen clinical tasks. We investigate training and data-curation strategies for developing a robust multimodal reasoning model in the medical domain. Our work focuses on supervised fine-tuning (SFT) and explores data recipes that leverage structured reasoning traces. Using our proposed data recipe, we scale experiments to a dataset of over 8 million examples and 6.8 billion response tokens, achieving state-of-the-art performance among open-source models across diverse out-of-distribution medical benchmark tasks. Our results further indicate that curating a high-quality, diverse training dataset with varying structured reasoning trace lengths enables the fine-tuned model to self-calibrate its reasoning trajectory length based on the downstream task, without explicit supervision. We present key insights, describe the data curation strategy, and outline next steps toward developing robust medical vision-language reasoning systems.
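The core curation idea in the abstract, mixing structured reasoning traces of varying lengths so the fine-tuned model can adapt its trace length to the task, can be pictured with a short sketch. The following Python is a minimal, hypothetical illustration, not the paper's released pipeline: the field names, length buckets, and per-bucket sample sizes are all assumptions.

```python
# Hypothetical sketch of length-balanced SFT data curation.
# Assumed schema: each example has a question, a structured reasoning
# trace, and a final answer. Buckets and sizes are illustrative only.

from dataclasses import dataclass
import random


@dataclass
class SFTExample:
    question: str
    reasoning: str  # structured reasoning trace
    answer: str


def trace_length(ex: SFTExample) -> int:
    # Crude length proxy: whitespace word count. A real pipeline would
    # count tokens with the target model's tokenizer.
    return len(ex.reasoning.split())


def curate(examples, buckets=((0, 100), (100, 400), (400, 10_000)),
           per_bucket=1000, seed=0):
    """Sample a mix of short, medium, and long reasoning traces so the
    curated corpus spans a range of trace lengths."""
    rng = random.Random(seed)
    curated = []
    for lo, hi in buckets:
        pool = [ex for ex in examples if lo <= trace_length(ex) < hi]
        rng.shuffle(pool)
        curated.extend(pool[:per_bucket])
    rng.shuffle(curated)  # interleave lengths for training
    return curated
```

Under this sketch, the mixed-length corpus is what would let the model learn, without explicit supervision, when a short trace suffices and when a longer one is warranted.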
Similar Papers
OpenMMReasoner: Pushing the Frontiers for Multimodal Reasoning with an Open and General Recipe
Artificial Intelligence
Teaches computers to understand pictures and words better.
MedVLThinker: Simple Baselines for Multimodal Medical Reasoning
CV and Pattern Recognition
Helps AI doctors think to diagnose better.