Semiconductor Wafer Map Defect Classification with Tiny Vision Transformers
By: Faisal Mohammad, Duksan Ryu
Potential Business Impact:
Finds tiny flaws on computer chips faster.
Semiconductor wafer defect classification is critical for ensuring high precision and yield in manufacturing. Traditional CNN-based models often struggle with class imbalance and with recognizing multiple overlapping defect types in wafer maps. To address these challenges, we propose ViT-Tiny, a lightweight Vision Transformer (ViT) framework optimized for wafer defect classification. Trained on the WM-38k dataset, ViT-Tiny outperforms its ViT-Base counterpart and state-of-the-art (SOTA) models such as MSF-Trans and CNN-based architectures. Through extensive ablation studies, we determine that a patch size of 16 provides optimal performance. ViT-Tiny achieves an F1-score of 98.4%, surpassing MSF-Trans by 2.94% in four-defect classification, improving recall by 2.86% in two-defect classification, and increasing precision by 3.13% in three-defect classification. Additionally, it demonstrates enhanced robustness under limited labeled data conditions, making it a computationally efficient and reliable solution for real-world semiconductor defect detection.
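The abstract's key ablation result is that a patch size of 16 works best. As a minimal sketch of what that choice means mechanically, the snippet below splits a wafer map into the non-overlapping 16x16 patches a ViT flattens into its input token sequence. The function name `patchify` and the 64x64 map size are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def patchify(wafer_map: np.ndarray, patch: int = 16) -> np.ndarray:
    """Split an (H, W) wafer map into flattened non-overlapping patches.

    Returns an array of shape (num_patches, patch * patch) -- the token
    sequence a ViT linearly embeds before adding positional encodings.
    Patch size 16 mirrors the paper's ablation result; the 64x64 input
    below is only an example size.
    """
    h, w = wafer_map.shape
    assert h % patch == 0 and w % patch == 0, "map must tile evenly"
    # Reshape to (H/p, p, W/p, p), reorder the axes so each patch is
    # contiguous, then flatten every patch into one row.
    tiles = wafer_map.reshape(h // patch, patch, w // patch, patch)
    tiles = tiles.transpose(0, 2, 1, 3).reshape(-1, patch * patch)
    return tiles

# A 64x64 wafer map yields (64/16)**2 = 16 tokens of length 256.
tokens = patchify(np.zeros((64, 64)), patch=16)
print(tokens.shape)  # (16, 256)
```

Smaller patches give a longer token sequence (finer spatial detail but quadratically more attention cost), which is the trade-off the ablation over patch sizes is probing.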
Similar Papers
Semiconductor SEM Image Defect Classification Using Supervised and Semi-Supervised Learning with Vision Transformers
CV and Pattern Recognition
Finds tiny flaws in computer chips automatically.
SKDU at De-Factify 4.0: Vision Transformer with Data Augmentation for AI-Generated Image Detection
CV and Pattern Recognition
Finds fake pictures made by computers.
Vision Transformers: the threat of realistic adversarial patches
CV and Pattern Recognition
Tricks AI into seeing people when they aren't there.