QwenCLIP: Boosting Medical Vision-Language Pretraining via LLM Embeddings and Prompt Tuning
By: Xiaoyang Wei, Camille Kurtz, Florence Cloppet
Potential Business Impact:
Helps computers read long medical reports and match them to medical images more accurately.
Contrastive Language-Image Pretraining (CLIP) has demonstrated strong generalization for vision-language tasks in computer vision and medical domains, yet its text encoder accepts only up to 77 tokens, which limits its ability to represent long and information-rich radiology reports. Recent adaptations using domain-specific encoders, such as PubMedBERT or ClinicalBERT, mitigate this issue by leveraging medical corpora, but remain constrained by their limited input length (typically 512 tokens) and relatively shallow semantic understanding. To address these limitations, we propose QwenCLIP, a vision-language framework that replaces CLIP's text encoder with a large language model (LLM)-based embedding module (e.g., Qwen3-Embedding) and introduces learnable prompts to enhance cross-modal alignment. By leveraging the extended context window and richer representations of LLMs, QwenCLIP captures comprehensive medical semantics from long-form clinical text, substantially improving medical image-text alignment and downstream performance on radiology benchmarks. Our code is publicly available at https://github.com/Wxy-24/QwenCLIP.
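To make the described recipe concrete, below is a minimal sketch of a QwenCLIP-style alignment head: pooled image features and long-context LLM text embeddings (e.g., from Qwen3-Embedding) are combined with learnable prompt vectors and trained with a CLIP-style symmetric contrastive loss. The module names, dimensions, and the way prompts are fused with the text embedding are illustrative assumptions, not the authors' exact implementation.

import torch
import torch.nn as nn
import torch.nn.functional as F

class QwenCLIPSketch(nn.Module):
    """Hypothetical sketch: LLM text embeddings + learnable prompts + CLIP-style loss."""
    def __init__(self, image_dim=768, text_dim=1024, proj_dim=512, num_prompts=8):
        super().__init__()
        # Learnable prompt vectors that steer the (frozen) LLM text embeddings
        # toward the image space; fusion scheme below is an assumption.
        self.prompts = nn.Parameter(torch.randn(num_prompts, text_dim) * 0.02)
        self.image_proj = nn.Linear(image_dim, proj_dim)
        self.text_proj = nn.Linear(text_dim, proj_dim)
        self.logit_scale = nn.Parameter(torch.tensor(2.659))  # ~log(1/0.07), as in CLIP

    def forward(self, image_feats, text_feats):
        # image_feats: (B, image_dim) pooled features from a vision encoder
        # text_feats:  (B, text_dim) pooled embeddings from an LLM embedder
        #              (e.g., Qwen3-Embedding), which can encode long reports.
        prompt_ctx = self.prompts.mean(dim=0, keepdim=True)      # (1, text_dim)
        text_feats = text_feats + prompt_ctx                     # broadcast over batch

        img = F.normalize(self.image_proj(image_feats), dim=-1)  # (B, proj_dim)
        txt = F.normalize(self.text_proj(text_feats), dim=-1)    # (B, proj_dim)

        logits = self.logit_scale.exp() * img @ txt.t()          # (B, B) similarity matrix
        targets = torch.arange(logits.size(0), device=logits.device)
        # Symmetric InfoNCE loss over image-to-text and text-to-image directions.
        return 0.5 * (F.cross_entropy(logits, targets) +
                      F.cross_entropy(logits.t(), targets))

# Example usage with random features:
#   loss = QwenCLIPSketch()(torch.randn(4, 768), torch.randn(4, 1024))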
Similar Papers
ProCLIP: Progressive Vision-Language Alignment via LLM-based Embedder
CV and Pattern Recognition
Lets computers understand long, many-language texts better.
uCLIP: Parameter-Efficient Multilingual Extension of Vision-Language Models with Unpaired Data
CV and Pattern Recognition
Helps computers understand pictures in many languages.