AttentionDrop: A Novel Regularization Method for Transformer Models
By: Mirza Samad Ahmed Baig, Syeda Anshrah Gillani, Abdul Akbar Khan, and more
Potential Business Impact:
Makes AI smarter and more reliable.
Transformer-based architectures achieve state-of-the-art performance across a wide range of tasks in natural language processing, computer vision, and speech processing. However, their immense capacity often leads to overfitting, especially when training data is limited or noisy. This work proposes AttentionDrop, a unified family of stochastic regularization techniques with three variants that operate directly on the self-attention distributions. Hard Attention Masking randomly zeroes out top-k attention logits per query to encourage diverse context utilization; Blurred Attention Smoothing applies a dynamic Gaussian convolution over attention logits to diffuse overly peaked distributions; and Consistency-Regularized AttentionDrop enforces output stability under multiple independent AttentionDrop perturbations via a KL-based consistency loss. The results demonstrate that AttentionDrop consistently improves accuracy, calibration, and adversarial robustness over standard Dropout, DropConnect, and R-Drop baselines.
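To make the three variants concrete, below is a minimal PyTorch sketch of how each could operate on attention logits. Function names, hyperparameters (k, drop probability, kernel size, sigma, lambda), and details such as masking dropped logits to negative infinity are illustrative assumptions, not the authors' reference implementation.

```python
# Illustrative sketch of the three AttentionDrop variants (assumptions, not the paper's code).
import torch
import torch.nn.functional as F

def hard_attention_masking(logits: torch.Tensor, k: int = 2, p: float = 0.1) -> torch.Tensor:
    """Randomly drop some of the top-k attention logits per query.

    Assumption: "zeroing out" a logit is realized by masking it to -inf,
    so its post-softmax attention weight becomes zero.
    """
    # logits: (..., num_queries, num_keys)
    topk_idx = logits.topk(k, dim=-1).indices                     # positions of the k largest logits per query
    drop = torch.rand_like(topk_idx, dtype=logits.dtype) < p      # Bernoulli(p) decision per top-k position
    mask = torch.zeros_like(logits, dtype=torch.bool)
    mask.scatter_(-1, topk_idx, drop)
    return logits.masked_fill(mask, float("-inf"))

def blurred_attention_smoothing(logits: torch.Tensor, kernel_size: int = 3, sigma: float = 1.0) -> torch.Tensor:
    """Convolve each query's attention logits along the key axis with a 1-D Gaussian kernel."""
    coords = torch.arange(kernel_size, dtype=logits.dtype, device=logits.device) - kernel_size // 2
    kernel = torch.exp(-coords**2 / (2 * sigma**2))
    kernel = (kernel / kernel.sum()).view(1, 1, -1)
    shape = logits.shape
    flat = logits.reshape(-1, 1, shape[-1])                       # treat each query row as a 1-D signal
    smoothed = F.conv1d(flat, kernel, padding=kernel_size // 2)   # same-length output for odd kernel_size
    return smoothed.reshape(shape)

def consistency_loss(model, x, y, lam: float = 1.0):
    """Two stochastic forward passes plus a symmetric KL consistency term (R-Drop-style)."""
    logits1, logits2 = model(x), model(x)                         # independent AttentionDrop perturbations
    task = 0.5 * (F.cross_entropy(logits1, y) + F.cross_entropy(logits2, y))
    lp1, lp2 = F.log_softmax(logits1, dim=-1), F.log_softmax(logits2, dim=-1)
    kl = 0.5 * (F.kl_div(lp1, lp2, log_target=True, reduction="batchmean")
                + F.kl_div(lp2, lp1, log_target=True, reduction="batchmean"))
    return task + lam * kl
```

In this reading, the first two functions would be applied to the pre-softmax attention scores inside each attention layer during training only, while the consistency loss wraps the whole model's training step; how the paper schedules or combines the variants is not specified here.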
Similar Papers
Crisp Attention: Regularizing Transformers via Structured Sparsity
Computation and Language
Makes AI smarter by using less information.
Attention-Only Transformers via Unrolled Subspace Denoising
Machine Learning (CS)
Makes AI understand things better with fewer parts.
Analytic theory of dropout regularization
Machine Learning (Stat)
Makes computer learning better by ignoring bad data.