Grounded Multilingual Medical Reasoning for Question Answering with Large Language Models
By: Pietro Ferrazzi, Aitor Soroa, Rodrigo Agerri
Potential Business Impact:
Helps doctors answer medical questions in many languages.
Large Language Models (LLMs) with reasoning capabilities have recently demonstrated strong potential in medical Question Answering (QA). Existing approaches are largely English-focused and primarily rely on distillation from general-purpose LLMs, raising concerns about the reliability of their medical knowledge. In this work, we present a method to generate multilingual reasoning traces grounded in factual medical knowledge. We produce 500k traces in English, Italian, and Spanish, using a retrieval-augmented generation approach over medical information from Wikipedia. The traces are generated to solve medical questions drawn from MedQA and MedMCQA, which we extend to Italian and Spanish. We test our pipeline in both in-domain and out-of-domain settings across Medical QA benchmarks, and demonstrate that our reasoning traces improve performance both when utilized via in-context learning (few-shot) and supervised fine-tuning, yielding state-of-the-art results among 8B-parameter LLMs. We believe that these resources can support the development of safer, more transparent clinical decision-support tools in multilingual settings. We release the full suite of resources: reasoning traces, translated QA datasets, Medical-Wikipedia, and fine-tuned models.
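The core step the abstract describes, retrieving Medical-Wikipedia passages and using them to ground a reasoning trace for a multiple-choice question, can be sketched as follows. This is a minimal illustration under stated assumptions, not the authors' actual pipeline: the toy corpus, the lexical-overlap retriever, and the prompt template are all placeholders (a real system would use a proper retriever and feed the prompt to an LLM to produce the trace).

```python
def score(query: str, passage: str) -> int:
    """Toy lexical-overlap score; a stand-in for a real retriever (e.g. dense or BM25)."""
    return len(set(query.lower().split()) & set(passage.lower().split()))

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the top-k passages from the corpus by overlap score."""
    return sorted(corpus, key=lambda p: score(query, p), reverse=True)[:k]

def build_trace_prompt(question: str, options: list[str], passages: list[str]) -> str:
    """Assemble a grounded prompt from which an LLM would generate a reasoning trace."""
    context = "\n".join(f"- {p}" for p in passages)
    opts = "\n".join(f"{chr(65 + i)}. {o}" for i, o in enumerate(options))
    return (
        f"Context (from Medical-Wikipedia):\n{context}\n\n"
        f"Question: {question}\n{opts}\n\n"
        "Reason step by step using only the context, then state the answer."
    )

# Hypothetical three-passage "Medical-Wikipedia" for illustration only.
corpus = [
    "Metformin is a first-line medication for type 2 diabetes",
    "Aspirin inhibits cyclooxygenase and reduces platelet aggregation",
    "Insulin therapy is required in type 1 diabetes",
]
passages = retrieve("first-line drug for type 2 diabetes", corpus)
prompt = build_trace_prompt(
    "What is a first-line drug for type 2 diabetes?",
    ["Aspirin", "Metformin"],
    passages,
)
print(prompt)
```

The same prompt template applies unchanged across languages, which is what makes the trace generation multilingual once the questions and retrieval corpus are translated.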
Similar Papers
Disentangling Reasoning and Knowledge in Medical Large Language Models
Computation and Language
Helps AI doctors think better, not just remember.
Structured Outputs Enable General-Purpose LLMs to be Medical Experts
Computation and Language
Helps AI give safer, smarter answers about health.
Medical Reasoning in the Era of LLMs: A Systematic Review of Enhancement Techniques and Applications
Computation and Language
Boosts AI's step-by-step medical thinking.