A Masked Reverse Knowledge Distillation Method Incorporating Global and Local Information for Image Anomaly Detection
By: Yuxin Jiang, Yunkang Cao, Weiming Shen
Potential Business Impact:
Finds hidden flaws in pictures better.
Knowledge distillation is an effective scheme for image anomaly detection and localization. However, a major drawback of this scheme is its tendency to overgeneralize, primarily because the input and supervisory signals are so similar. To address this issue, this paper introduces masked reverse knowledge distillation (MRKD). By employing image-level masking (ILM) and feature-level masking (FLM), MRKD turns the task of image reconstruction into image restoration. Specifically, ILM captures global information by differentiating the input signal from the supervisory signal, while FLM injects synthetic feature-level anomalies so that the learned representations carry sufficient local information. Together, these two strategies give MRKD a stronger capacity to capture image context and make it less prone to overgeneralization. Experiments on the widely used MVTec anomaly detection dataset show that MRKD achieves 98.9% image-level AUROC, 98.4% pixel-level AUROC, and 95.3% AUPRO. Extensive ablation experiments further validate MRKD's ability to mitigate the overgeneralization problem.
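To make the two masking strategies concrete, below is a minimal PyTorch sketch of how image-level masking and feature-level perturbation could look in a reverse-distillation pipeline. It is not the authors' implementation; the function names and the values of `patch_size`, `mask_ratio`, and `noise_std` are illustrative assumptions.

```python
# Sketch only: illustrative ILM/FLM operations, not the paper's released code.
import torch
import torch.nn.functional as F

def image_level_mask(images: torch.Tensor, patch_size: int = 16,
                     mask_ratio: float = 0.4) -> torch.Tensor:
    """ILM sketch: zero out random patches so the input differs from the
    supervisory signal and the network must restore, not merely reconstruct."""
    b, c, h, w = images.shape
    gh, gw = h // patch_size, w // patch_size
    # One binary keep/drop decision per patch, per image.
    keep = (torch.rand(b, 1, gh, gw, device=images.device) > mask_ratio).float()
    keep = F.interpolate(keep, size=(h, w), mode="nearest")
    return images * keep

def feature_level_mask(features: torch.Tensor, noise_std: float = 0.1,
                       mask_ratio: float = 0.3) -> torch.Tensor:
    """FLM sketch: perturb a random subset of spatial feature positions with
    Gaussian noise, acting as synthetic feature-level anomalies."""
    b, c, h, w = features.shape
    anomalous = (torch.rand(b, 1, h, w, device=features.device) < mask_ratio).float()
    noise = torch.randn_like(features) * noise_std
    return features + anomalous * noise

if __name__ == "__main__":
    x = torch.randn(2, 3, 256, 256)
    x_masked = image_level_mask(x)                # restoration-style input
    feats = torch.randn(2, 512, 16, 16)           # stand-in for teacher features
    feats_perturbed = feature_level_mask(feats)   # synthetic feature anomalies
    print(x_masked.shape, feats_perturbed.shape)
```

In a reverse-distillation setup, a frozen teacher would typically encode the clean image while the student decodes from the masked or perturbed signals and is trained to match the teacher's clean features; the exact losses and network layers are as described in the paper, not in this sketch.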
Similar Papers
Reinforced Multi-teacher Knowledge Distillation for Efficient General Image Forgery Detection and Localization
CV and Pattern Recognition
Finds fake pictures by looking at tiny details.
IMKD: Intensity-Aware Multi-Level Knowledge Distillation for Camera-Radar Fusion
CV and Pattern Recognition
Helps cars see better with radar and cameras.
Distilling Future Temporal Knowledge with Masked Feature Reconstruction for 3D Object Detection
CV and Pattern Recognition
Helps self-driving cars see the future.