
On the effectiveness of multimodal privileged knowledge distillation in two vision transformer based diagnostic applications

Published: August 6, 2025 | arXiv ID: 2508.06558v1

By: Simon Baur, Alexandra Benova, Emilio Dolgener Cantú, and more

Potential Business Impact:

Teaches a medical imaging AI to see better by learning from extra data (text reports, metadata) that is available only during training.

Deploying deep learning models in clinical practice often requires leveraging multiple data modalities, such as images, text, and structured data, to achieve robust and trustworthy decisions. However, not all modalities are always available at inference time. In this work, we propose multimodal privileged knowledge distillation (MMPKD), a training strategy that uses additional modalities, available only during training, to guide a unimodal vision model. Specifically, we used a text-based teacher model for chest radiographs (MIMIC-CXR) and a tabular metadata-based teacher model for mammography (CBIS-DDSM) to distill knowledge into a vision transformer student model. We show that MMPKD improves the attention maps' zero-shot capability to localize regions of interest (ROIs) in input images, although, contrary to what prior research has suggested, this effect does not generalize across domains.
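For concreteness, the sketch below shows one common way such privileged distillation can be set up: a frozen teacher trained on the privileged modality (here, random text embeddings standing in for radiology-report features) supplies soft targets for a vision student through a temperature-scaled KL loss, and the privileged modality is touched only in the training step. The class names, dimensions, and hyperparameters (TextTeacher, VisionStudent, TEMPERATURE, ALPHA) are illustrative assumptions, not the paper's implementation; a real student would be an actual vision transformer (e.g., from timm) so that its attention maps can be inspected.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical hyperparameters; the paper's abstract does not specify these.
NUM_CLASSES = 2
TEMPERATURE = 4.0   # softening temperature, a common KD default
ALPHA = 0.5         # balance between task loss and distillation loss

class TextTeacher(nn.Module):
    """Stand-in teacher over the privileged modality (e.g., report text embeddings)."""
    def __init__(self, text_dim=768):
        super().__init__()
        self.head = nn.Linear(text_dim, NUM_CLASSES)
    def forward(self, text_emb):
        return self.head(text_emb)

class VisionStudent(nn.Module):
    """Stand-in vision student; a real setup would use a ViT backbone."""
    def __init__(self, img_dim=3 * 224 * 224):
        super().__init__()
        self.backbone = nn.Sequential(nn.Flatten(), nn.Linear(img_dim, 256), nn.GELU())
        self.head = nn.Linear(256, NUM_CLASSES)
    def forward(self, images):
        return self.head(self.backbone(images))

def mmpkd_loss(student_logits, teacher_logits, labels):
    """Task cross-entropy plus temperature-scaled KL distillation from the teacher."""
    task = F.cross_entropy(student_logits, labels)
    distill = F.kl_div(
        F.log_softmax(student_logits / TEMPERATURE, dim=-1),
        F.softmax(teacher_logits / TEMPERATURE, dim=-1),
        reduction="batchmean",
    ) * TEMPERATURE**2
    return ALPHA * task + (1 - ALPHA) * distill

# One training step: text embeddings are used only here; inference needs images alone.
teacher, student = TextTeacher(), VisionStudent()
opt = torch.optim.AdamW(student.parameters(), lr=1e-4)
images, text_emb = torch.randn(8, 3, 224, 224), torch.randn(8, 768)
labels = torch.randint(0, NUM_CLASSES, (8,))
with torch.no_grad():
    t_logits = teacher(text_emb)  # privileged modality, training time only
loss = mmpkd_loss(student(images), t_logits, labels)
loss.backward()
opt.step()
```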

Page Count
4 pages

Category
Computer Science: Computer Vision and Pattern Recognition