Efficient Whole Slide Pathology VQA via Token Compression
By: Weimin Lyu, Qingqiao Hu, Kehan Qi, and more
Potential Business Impact:
Lets computers answer questions about disease slides.
Whole-slide images (WSIs) in pathology can reach up to 10,000 x 10,000 pixels, posing significant challenges for multimodal large language models (MLLMs) due to long context lengths and high computational demands. Previous methods typically focus on patch-level analysis or slide-level classification using CLIP-based models with multiple-instance learning, but they lack the generative capabilities needed for visual question answering (VQA). More recent MLLM-based approaches address VQA by feeding thousands of patch tokens directly into the language model, which leads to excessive resource consumption. To address these limitations, we propose Token Compression Pathology LLaVA (TCP-LLaVA), the first MLLM architecture to perform WSI VQA via token compression. TCP-LLaVA introduces a set of trainable compression tokens that aggregate visual and textual information through a modality compression module, inspired by the [CLS] token mechanism in BERT. Only the compressed tokens are forwarded to the LLM for answer generation, significantly reducing input length and computational cost. Experiments on ten TCGA tumor subtypes show that TCP-LLaVA outperforms existing MLLM baselines in VQA accuracy while reducing training resource consumption by a substantial margin.
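The abstract describes the compression mechanism without an implementation, so the following is a minimal PyTorch sketch of how a [CLS]-style modality compression module could look. Everything here is an illustrative assumption rather than the authors' code: the class name ModalityCompressor, the parameter num_compress_tokens, the embedding width, and the use of a single cross-attention layer are all hypothetical. The core idea follows the abstract: a small set of trainable query tokens attends over the concatenated patch and question embeddings, and only those few outputs would be handed to the LLM.

import torch
import torch.nn as nn

class ModalityCompressor(nn.Module):
    """Hypothetical sketch: trainable compression tokens aggregate visual
    and textual information via cross-attention, analogous to how BERT's
    [CLS] token summarizes a sequence."""

    def __init__(self, dim: int = 1024, num_compress_tokens: int = 32, num_heads: int = 8):
        super().__init__()
        # Learnable queries, one per compression token (count is an assumption).
        self.compress_tokens = nn.Parameter(torch.randn(num_compress_tokens, dim) * 0.02)
        self.cross_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, patch_tokens: torch.Tensor, text_tokens: torch.Tensor) -> torch.Tensor:
        # patch_tokens: (B, N_patches, dim) -- thousands of WSI patch embeddings
        # text_tokens:  (B, N_text, dim)    -- embedded question tokens
        B = patch_tokens.size(0)
        queries = self.compress_tokens.unsqueeze(0).expand(B, -1, -1)
        # Compression tokens attend over the joint visual + textual sequence.
        keys_values = torch.cat([patch_tokens, text_tokens], dim=1)
        compressed, _ = self.cross_attn(queries, keys_values, keys_values)
        # Only these few compressed tokens are forwarded to the LLM.
        return self.norm(compressed)

# Example: 4,096 patch tokens plus a 24-token question compressed to 32 tokens.
compressor = ModalityCompressor()
patches = torch.randn(1, 4096, 1024)
question = torch.randn(1, 24, 1024)
print(compressor(patches, question).shape)  # torch.Size([1, 32, 1024])

Under this sketch, the language model sees 32 tokens instead of thousands, so its attention and memory cost shrink roughly in proportion to the compression ratio, which is consistent with the resource savings the abstract reports.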
Similar Papers
LoC-Path: Learning to Compress for Pathology Multimodal Large Language Models
CV and Pattern Recognition
Helps doctors find diseases on slides faster.
PathVQ: Reforming Computational Pathology Foundation Model for Whole Slide Image Analysis via Vector Quantization
CV and Pattern Recognition
Makes cancer scans faster and more accurate.
A Versatile Pathology Co-pilot via Reasoning Enhanced Multimodal Large Language Model
Image and Video Processing
Helps doctors find diseases faster using AI.