A Fully Transformer Based Multimodal Framework for Explainable Cancer Image Segmentation Using Radiology Reports
By: Enobong Adahada, Isabel Sassoon, Kate Hone and more
Potential Business Impact:
Finds breast cancer tumors in ultrasounds better.
We introduce Med-CTX, a fully transformer-based multimodal framework for explainable breast cancer ultrasound segmentation. We integrate clinical radiology reports to boost both performance and interpretability. Med-CTX achieves precise lesion delineation by using a dual-branch visual encoder that combines ViT and Swin transformers, together with uncertainty-aware fusion. Clinical language structured with BI-RADS semantics is encoded by BioClinicalBERT and combined with visual features via cross-modal attention, allowing the model to provide clinically grounded, model-generated explanations. Our methodology generates segmentation masks, uncertainty maps, and diagnostic rationales simultaneously, increasing confidence and transparency in computer-assisted diagnosis. On the BUS-BRA dataset, Med-CTX achieves a Dice score of 99% and an IoU of 95%, beating the U-Net, ViT, and Swin baselines. Clinical text plays a key role in segmentation accuracy and explanation quality, as evidenced by ablation studies showing a 5.4% drop in Dice score and a 31% drop in CIDEr when it is removed. Med-CTX achieves strong multimodal alignment (CLIP score: 85%) and improved confidence calibration (ECE: 3.2%), setting a new bar for trustworthy multimodal medical architectures.
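The cross-modal attention step described above, in which visual tokens query the encoded report tokens, can be sketched as single-head scaled dot-product attention with a residual fusion. This is a minimal illustrative sketch, not the paper's actual implementation: the function name, the single-head simplification, and all dimensions are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(visual, text):
    """Fuse text features into visual features.

    visual: (n_vis_tokens, d) patch embeddings (e.g. from ViT/Swin).
    text:   (n_txt_tokens, d) report embeddings (e.g. from BioClinicalBERT).
    Visual tokens act as queries; text tokens act as keys and values.
    """
    d_k = visual.shape[-1]
    scores = visual @ text.T / np.sqrt(d_k)   # (n_vis, n_txt) similarity
    attn = softmax(scores, axis=-1)           # each row sums to 1
    return visual + attn @ text               # residual fusion of text context

# Illustrative shapes: 196 image patch tokens, 32 report tokens, width 64.
rng = np.random.default_rng(0)
vis = rng.standard_normal((196, 64))
txt = rng.standard_normal((32, 64))
fused = cross_modal_attention(vis, txt)       # shape (196, 64)
```

In a real model the queries, keys, and values would pass through learned projections and multiple heads; the sketch keeps only the attention-and-fuse pattern that lets each image region condition on the relevant parts of the report.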
Similar Papers
Radiology Report Generation with Layer-Wise Anatomical Attention
CV and Pattern Recognition
Helps doctors write X-ray reports faster.
MEDFORM: A Foundation Model for Contrastive Learning of CT Imaging and Clinical Numeric Data in Multi-Cancer Analysis
CV and Pattern Recognition
Helps doctors find cancer better with scans and notes.
XBusNet: Text-Guided Breast Ultrasound Segmentation via Multimodal Vision-Language Learning
CV and Pattern Recognition
Helps doctors find tiny tumors in ultrasound images.