Multi-Task Learning for Visually Grounded Reasoning in Gastrointestinal VQA

Published: November 6, 2025 | arXiv ID: 2511.04384v1

By: Itbaan Safwan, Muhammad Annas Shaikh, Muhammad Haaris, and more

Potential Business Impact:

Helps clinicians interpret gastrointestinal endoscopy images by answering questions about them and highlighting the image regions that support each answer.

Business Areas:
Image Recognition, Data and Analytics, Software

We present a multi-task framework for the MediaEval Medico 2025 challenge, leveraging a LoRA-tuned Florence-2 model for simultaneous visual question answering (VQA), explanation generation, and visual grounding. The proposed system integrates three curated datasets: (1) Kvasir-VQA-x1 for question-answer learning, (2) a synthetically enriched explanation dataset offering structured medical reasoning, and (3) text-to-region pairs linking visual features with segmentation masks. This multi-task setup enables the model to jointly learn visual grounding, reasoning, and interpretation, producing responses that are both accurate and interpretable. Extensive evaluation demonstrates that our approach substantially improves over single-task baselines in both answer accuracy and visual localization, highlighting the effectiveness of grounded multi-task learning for medical VQA applications.
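The abstract describes a LoRA-tuned Florence-2 backbone trained jointly on three datasets (question answering, explanation generation, and text-to-region grounding). Below is a minimal sketch of how such a multi-task fine-tuning setup might look with Hugging Face `transformers` and `peft`; the checkpoint name, prompt prefixes, and LoRA hyperparameters are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch (assumed setup, not the paper's code): LoRA-tuning Florence-2
# for multi-task prompts covering VQA, explanation, and visual grounding.
import torch
from transformers import AutoProcessor, AutoModelForCausalLM
from peft import LoraConfig, get_peft_model

model_id = "microsoft/Florence-2-base"  # assumed checkpoint; the paper may use another variant
processor = AutoProcessor.from_pretrained(model_id, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(model_id, trust_remote_code=True)

# Attach LoRA adapters to the attention projections; rank and target modules
# here are illustrative guesses, not the reported hyperparameters.
lora_cfg = LoraConfig(
    r=16,
    lora_alpha=32,
    lora_dropout=0.05,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
)
model = get_peft_model(model, lora_cfg)

def build_example(image, task, text, target):
    """Map one sample from any of the three datasets into a shared seq2seq format.

    The task prefixes below are hypothetical placeholders; the paper's exact
    prompt templates are not specified in the abstract.
    """
    prompt = {
        "vqa": f"<VQA> {text}",          # Kvasir-VQA-x1 question-answer pairs
        "explain": f"<EXPLAIN> {text}",  # synthetic structured-reasoning explanations
        "ground": f"<GROUND> {text}",    # text-to-region pairs from segmentation masks
    }[task]
    inputs = processor(text=prompt, images=image, return_tensors="pt")
    inputs["labels"] = processor.tokenizer(target, return_tensors="pt").input_ids
    return inputs

# Training-loop idea: interleave batches from the three datasets so the shared
# backbone learns answering, reasoning, and localization jointly, e.g.
#   loss = model(**build_example(img, "vqa", question, answer)).loss
```

The key design choice suggested by the abstract is that all three tasks share one sequence-to-sequence interface, so a single set of LoRA weights can be updated from mixed-task batches rather than training separate heads per task.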

Country of Origin
🇵🇰 Pakistan

Page Count
5 pages

Category
Computer Science:
Computer Vision and Pattern Recognition