Agglomerating Large Vision Encoders via Distillation for VFSS Segmentation
By: Chengxi Zeng, Yuxuan Jiang, Fan Zhang, and more
Potential Business Impact:
Teaches small AI to see like big AI.
The deployment of foundation models for medical imaging has demonstrated considerable success. However, the training overhead associated with their downstream tasks remains substantial due to the size of the image encoders employed, and their inference complexity is also high. Although lightweight variants of these foundation models have been developed, their performance is constrained by limited model capacity and suboptimal training strategies. To achieve a better trade-off between complexity and performance, we propose a new framework that improves the performance of low-complexity models via knowledge distillation from multiple large medical foundation models (e.g., MedSAM, RAD-DINO, MedCLIP), each specializing in different vision tasks, with the goal of effectively bridging the performance gap on medical image segmentation tasks. The agglomerated model demonstrates superior generalization across 12 segmentation tasks, whereas specialized models require explicit training for each task. Our approach achieved an average performance gain of 2% in Dice coefficient compared to simple distillation.
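The core idea described in the abstract is multi-teacher feature distillation: a single lightweight student encoder is trained to match the intermediate representations of several frozen teachers at once, agglomerating their complementary strengths. Below is a minimal sketch of that idea in PyTorch; the `MultiTeacherDistiller` class, the per-teacher projection heads, the feature dimensions, and the cosine-similarity loss are illustrative assumptions for exposition, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiTeacherDistiller(nn.Module):
    """Distill a lightweight student encoder from several frozen teachers.

    Each teacher's feature map is matched via a per-teacher projection
    head so one student can agglomerate heterogeneous representations.
    Dimensions and the loss choice are illustrative assumptions.
    """

    def __init__(self, student: nn.Module, teacher_dims: dict, student_dim: int = 256):
        super().__init__()
        self.student = student  # trainable lightweight encoder
        # One 1x1-conv projection per teacher maps student features
        # into that teacher's embedding space.
        self.heads = nn.ModuleDict({
            name: nn.Conv2d(student_dim, dim, kernel_size=1)
            for name, dim in teacher_dims.items()
        })

    def forward(self, images: torch.Tensor, teacher_feats: dict) -> torch.Tensor:
        s = self.student(images)  # (B, student_dim, H, W)
        loss = 0.0
        for name, t in teacher_feats.items():
            p = self.heads[name](s)
            # Resize student features to each teacher's spatial grid,
            # since teachers may use different patch sizes / strides.
            p = F.interpolate(p, size=t.shape[-2:], mode="bilinear",
                              align_corners=False)
            # Cosine distance over channel vectors at every location.
            loss = loss + (1 - F.cosine_similarity(p, t, dim=1)).mean()
        return loss / len(teacher_feats)


# Hypothetical training step: teachers run frozen under no_grad, and the
# teacher names / feature shapes below are placeholders.
student = nn.Sequential(nn.Conv2d(3, 256, 3, padding=1))  # toy encoder
distiller = MultiTeacherDistiller(
    student, teacher_dims={"medsam": 256, "rad_dino": 768, "medclip": 512})
images = torch.randn(2, 3, 224, 224)
with torch.no_grad():
    teacher_feats = {"medsam": torch.randn(2, 256, 14, 14),
                     "rad_dino": torch.randn(2, 768, 16, 16),
                     "medclip": torch.randn(2, 512, 7, 7)}
loss = distiller(images, teacher_feats)
loss.backward()
```

The projection heads absorb the mismatch between the student's feature space and each teacher's, so the distillation signal shapes the shared student backbone rather than forcing it to copy any single teacher verbatim.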
Similar Papers
Task-Specific Knowledge Distillation from the Vision Foundation Model for Enhanced Medical Image Segmentation
CV and Pattern Recognition
Teaches computers to see diseases in X-rays.
DINOv2-powered Few-Shot Semantic Segmentation: A Unified Framework via Cross-Model Distillation and 4D Correlation Mining
CV and Pattern Recognition
Teaches computers to recognize new things with few examples.
From SAM to DINOv2: Towards Distilling Foundation Models to Lightweight Baselines for Generalized Polyp Segmentation
CV and Pattern Recognition
Helps doctors find cancer polyps better and faster.