Dual-Model Weight Selection and Self-Knowledge Distillation for Medical Image Classification

Published: August 28, 2025 | arXiv ID: 2508.20461v1

By: Ayaka Tsutsumi, Guang Li, Ren Togo and more

Potential Business Impact:

Lets lightweight models on small, resource-limited computers diagnose diseases from medical scans.

Business Areas:
Image Recognition, Data and Analytics, Software

We propose a novel medical image classification method that integrates dual-model weight selection with self-knowledge distillation (SKD). In real-world medical settings, deploying large-scale models is often limited by computational resource constraints, which pose significant challenges for practical implementation. Developing lightweight models that achieve performance comparable to large-scale models while maintaining computational efficiency is therefore crucial. To address this, we employ a dual-model weight selection strategy that initializes two lightweight models with weights derived from a large pretrained model, enabling effective knowledge transfer. SKD is then applied to these selected models, allowing a broad range of initial weight configurations to be used without imposing excessive additional computational cost, followed by fine-tuning for the target classification tasks. By combining dual-model weight selection with self-knowledge distillation, our method overcomes a limitation of conventional approaches, which often fail to retain critical information in compact models. Extensive experiments on publicly available datasets (chest X-ray images, lung computed tomography scans, and brain magnetic resonance imaging scans) demonstrate the superior performance and robustness of our approach compared to existing methods.
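
To make the two-stage pipeline concrete, here is a minimal PyTorch sketch of what the abstract describes: two lightweight models initialized from a large pretrained model, then trained with a mutual self-distillation loss. The name/shape-matching weight-selection rule, the ResNet-50/ResNet-18 pairing, and the symmetric soft-label objective are illustrative assumptions, not the paper's exact formulation, which the summary does not specify.

```python
import torch
import torch.nn.functional as F
from torchvision import models

def select_weights(large_state, small_model):
    """Copy parameters from the large pretrained model into the small model
    wherever names and shapes match (an assumed selection rule)."""
    small_state = small_model.state_dict()
    for name, param in large_state.items():
        if name in small_state and small_state[name].shape == param.shape:
            small_state[name] = param.clone()
    small_model.load_state_dict(small_state)
    return small_model

def skd_step(x, y, model_a, model_b, opt_a, opt_b, T=4.0, alpha=0.5):
    """One mutual self-distillation step: each model learns from the hard
    labels and from the other's temperature-softened predictions."""
    logits_a, logits_b = model_a(x), model_b(x)

    def loss(own_logits, peer_logits):
        ce = F.cross_entropy(own_logits, y)
        kd = F.kl_div(F.log_softmax(own_logits / T, dim=1),
                      F.softmax(peer_logits.detach() / T, dim=1),
                      reduction="batchmean") * T * T
        return (1 - alpha) * ce + alpha * kd

    loss_a, loss_b = loss(logits_a, logits_b), loss(logits_b, logits_a)
    opt_a.zero_grad(); loss_a.backward(); opt_a.step()
    opt_b.zero_grad(); loss_b.backward(); opt_b.step()
    return loss_a.item(), loss_b.item()

# Dual-model weight selection: both students start from the same pretrained
# ResNet-50 weights (only layers with matching shapes are actually copied).
large_state = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V2).state_dict()
student_a = select_weights(large_state, models.resnet18(num_classes=3))
student_b = select_weights(large_state, models.resnet18(num_classes=3))

# Dummy batch standing in for chest X-ray / CT / MRI inputs (3 classes assumed).
x, y = torch.randn(8, 3, 224, 224), torch.randint(0, 3, (8,))
opt_a = torch.optim.SGD(student_a.parameters(), lr=1e-3, momentum=0.9)
opt_b = torch.optim.SGD(student_b.parameters(), lr=1e-3, momentum=0.9)
skd_step(x, y, student_a, student_b, opt_a, opt_b)
```

The `T * T` factor is the standard distillation convention that keeps gradient magnitudes comparable across temperatures; after SKD training, each compact student would be fine-tuned on the target classification task as the abstract describes.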

Country of Origin
🇯🇵 Japan

Page Count
12 pages

Category
Computer Science:
CV and Pattern Recognition