SDHSI-Net: Learning Better Representations for Hyperspectral Images via Self-Distillation
By: Prachet Dev Singh, Shyamsundar Paramasivam, Sneha Barman, and more
Potential Business Impact:
Improves how accurately computers classify materials and objects in hyperspectral images.
Hyperspectral image (HSI) classification presents unique challenges due to its high spectral dimensionality and limited labeled data. Traditional deep learning models often suffer from overfitting and high computational costs. Self-distillation (SD), a variant of knowledge distillation in which a network learns from its own predictions, has recently emerged as a promising strategy for enhancing model performance without requiring an external teacher network. In this work, we explore the application of SD to HSI classification by treating the network's earlier outputs as soft targets, thereby enforcing consistency between intermediate and final predictions. This improves intra-class compactness and inter-class separability in the learned feature space. Our approach is validated on two benchmark HSI datasets and demonstrates significant improvements in classification accuracy and robustness, highlighting the effectiveness of SD for spectral-spatial learning. Code is available at https://github.com/Prachet-Dev-Singh/SDHSI.
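The consistency described in the abstract, where the network's earlier predictions serve as soft targets for its final output, can be sketched as a self-distillation loss. Below is a minimal PyTorch-style illustration; the head names, temperature, and loss weighting are assumptions made for exposition, not the paper's exact formulation (see the linked repository for the authors' implementation).

```python
# Minimal self-distillation loss sketch (illustrative, not the authors' code).
# Assumes a model with an intermediate classifier head and a final classifier
# head, each producing logits for the same spectral-spatial input patch.
import torch
import torch.nn.functional as F

def self_distillation_loss(final_logits, intermediate_logits, labels,
                           temperature=4.0, alpha=0.5):
    """Hard-label supervision on both heads plus a softened consistency term."""
    # Standard cross-entropy on both the final and intermediate predictions.
    ce_final = F.cross_entropy(final_logits, labels)
    ce_mid = F.cross_entropy(intermediate_logits, labels)

    # Earlier (intermediate) predictions, softened and detached, act as the
    # soft targets; the final head is pulled toward them via KL divergence.
    soft_targets = F.softmax(intermediate_logits.detach() / temperature, dim=1)
    log_preds = F.log_softmax(final_logits / temperature, dim=1)
    consistency = F.kl_div(log_preds, soft_targets,
                           reduction="batchmean") * temperature ** 2

    return ce_final + ce_mid + alpha * consistency

# Example usage with dummy logits (batch of 8 patches, 16 land-cover classes).
if __name__ == "__main__":
    final_logits = torch.randn(8, 16, requires_grad=True)
    intermediate_logits = torch.randn(8, 16, requires_grad=True)
    labels = torch.randint(0, 16, (8,))
    loss = self_distillation_loss(final_logits, intermediate_logits, labels)
    loss.backward()
    print(f"total loss: {loss.item():.4f}")
```

The temperature softens both distributions so the consistency term carries information about class similarities rather than only the top prediction; alpha balances it against the label supervision.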
Similar Papers
Bayesian Self-Distillation for Image Classification
CV and Pattern Recognition
Uses Bayesian self-distillation to make image classifiers more accurate and more reliable.
Dual-Stream Spectral Decoupling Distillation for Remote Sensing Object Detection
CV and Pattern Recognition
Helps computers detect objects in satellite and aerial imagery.
Learning Spectral Diffusion Prior for Hyperspectral Image Reconstruction
CV and Pattern Recognition
Reconstructs hyperspectral images by restoring lost spectral detail.