Effortless Vision-Language Model Specialization in Histopathology without Annotation
By: Jingna Qiu, Nishanth Jain, Jonas Ammeling, and more
Potential Business Impact:
Teaches AI to read microscope images of tissue better, without needing human-labeled examples.
Recent advances in Vision-Language Models (VLMs) in histopathology, such as CONCH and QuiltNet, have demonstrated impressive zero-shot classification capabilities across various tasks. However, their general-purpose design may lead to suboptimal performance in specific downstream applications. While supervised fine-tuning methods address this issue, they require manually labeled samples for adaptation. This paper investigates annotation-free adaptation of VLMs through continued pretraining on domain- and task-relevant image-caption pairs extracted from existing databases. Our experiments on two VLMs, CONCH and QuiltNet, across three downstream tasks reveal that these pairs substantially enhance both zero-shot and few-shot performance. Notably, with larger training sizes, continued pretraining matches the performance of few-shot methods while eliminating manual labeling. Its effectiveness, task-agnostic design, and annotation-free workflow make it a promising pathway for adapting VLMs to new histopathology tasks. Code is available at https://github.com/DeepMicroscopy/Annotation-free-VLM-specialization.
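The core idea (continued pretraining on retrieved image-caption pairs, then prompt-based zero-shot classification) can be sketched in a few lines. The snippet below is a minimal illustration, not the authors' released code: it assumes a CLIP-style model exposing `encode_image`, `encode_text`, and `logit_scale` (as in CONCH/QuiltNet-like encoders), and a dataloader of domain-relevant image-caption pairs pulled from an existing database; all names are placeholders.

```python
# Sketch: annotation-free continued pretraining + zero-shot classification
# for a CLIP-style histopathology VLM. Model interface and data pipeline
# are illustrative assumptions, not the paper's exact implementation.
import torch
import torch.nn.functional as F

def clip_contrastive_loss(image_emb, text_emb, logit_scale):
    # Symmetric InfoNCE loss over matched image-caption pairs in a batch.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)
    logits = logit_scale * image_emb @ text_emb.t()
    targets = torch.arange(logits.size(0), device=logits.device)
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2

def continued_pretraining(model, pair_loader, epochs=1, lr=1e-5):
    # Adapt the VLM on domain-/task-relevant image-caption pairs retrieved
    # from an existing database -- no manual class labels are needed.
    optimizer = torch.optim.AdamW(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for images, token_ids in pair_loader:
            image_emb = model.encode_image(images)
            text_emb = model.encode_text(token_ids)
            loss = clip_contrastive_loss(image_emb, text_emb,
                                         model.logit_scale.exp())
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
    return model

@torch.no_grad()
def zero_shot_classify(model, images, class_prompt_tokens):
    # Score each image against text prompts (one per class) and pick the
    # most similar prompt -- the standard CLIP-style zero-shot protocol.
    image_emb = F.normalize(model.encode_image(images), dim=-1)
    text_emb = F.normalize(model.encode_text(class_prompt_tokens), dim=-1)
    return (image_emb @ text_emb.t()).argmax(dim=-1)
```

After continued pretraining, the same prompt-based zero-shot protocol is reused unchanged, which is what makes the adaptation task-agnostic.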
Similar Papers
Investigating Zero-Shot Diagnostic Pathology in Vision-Language Models with Efficient Prompt Design
CV and Pattern Recognition
Helps doctors find cancer faster by using AI to read pathology images.
How Good is my Histopathology Vision-Language Foundation Model? A Holistic Benchmark
Image and Video Processing
Helps doctors find cancer faster and more accurately.
Leveraging Vision-Language Embeddings for Zero-Shot Learning in Histopathology Images
CV and Pattern Recognition
Helps doctors find diseases in pictures without task-specific training data.