Entropy-Guided Token Dropout: Training Autoregressive Language Models with Limited Domain Data
By: Jiapeng Wang, Yiwen Hu, Yanzipeng Gao and more
Potential Business Impact:
Keeps AI language models learning well from limited data, even after many passes over it.
As high-quality, domain-specific data grows increasingly scarce, multi-epoch training has become a practical strategy for adapting large language models (LLMs). However, autoregressive models often suffer from performance degradation under repeated data exposure, where overfitting leads to a marked decline in model capability. Through empirical analysis, we trace this degradation to an imbalance in learning dynamics: predictable, low-entropy tokens are learned quickly and come to dominate optimization, while the model's ability to generalize on high-entropy tokens deteriorates with continued training. To address this, we introduce EntroDrop, an entropy-guided token dropout method that functions as structured data regularization. EntroDrop selectively masks low-entropy tokens during training and employs a curriculum schedule to adjust regularization strength in alignment with training progress. Experiments across model scales from 0.6B to 8B parameters show that EntroDrop consistently outperforms standard regularization baselines and maintains robust performance throughout extended multi-epoch training. These findings underscore the importance of aligning regularization with token-level learning dynamics when training on limited data. Our approach offers a promising pathway toward more effective adaptation of LLMs in data-constrained domains.
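To make the core idea concrete, here is a minimal PyTorch-style sketch of entropy-guided token dropout as the abstract describes it: per-token predictive entropy is computed from the model's logits, the lowest-entropy tokens are excluded from the loss, and the drop ratio follows a curriculum schedule. The quantile thresholding, the linear ramp, and all function and parameter names (`entropy_guided_dropout_loss`, `curriculum_drop_ratio`, `drop_ratio`, `max_ratio`) are assumptions for illustration, not the paper's actual implementation of EntroDrop.

```python
# Illustrative sketch only: the real EntroDrop may compute entropy and
# schedule the drop ratio differently. Names here are hypothetical.
import torch
import torch.nn.functional as F

def entropy_guided_dropout_loss(logits, labels, drop_ratio, ignore_index=-100):
    """Cross-entropy loss that masks out the lowest-entropy tokens.

    logits: (batch, seq_len, vocab) next-token predictions
    labels: (batch, seq_len) target token ids (already shifted)
    drop_ratio: fraction of valid tokens to exclude from the loss
    """
    # Per-token predictive entropy H = -sum_v p(v) log p(v) over the vocabulary.
    log_probs = F.log_softmax(logits.float(), dim=-1)
    entropy = -(log_probs.exp() * log_probs).sum(dim=-1)  # (batch, seq_len)

    valid = labels != ignore_index
    flat_entropy = entropy[valid]
    if flat_entropy.numel() == 0:
        return logits.new_zeros(())

    # Tokens below the drop_ratio entropy quantile are dropped from the loss.
    threshold = torch.quantile(flat_entropy, drop_ratio)
    keep = valid & (entropy > threshold)

    per_token_loss = F.cross_entropy(
        logits.view(-1, logits.size(-1)),
        labels.view(-1),
        ignore_index=ignore_index,
        reduction="none",
    ).view_as(labels)

    # Average the loss over the retained (higher-entropy) tokens only.
    return (per_token_loss * keep).sum() / keep.sum().clamp(min=1)

def curriculum_drop_ratio(step, total_steps, max_ratio=0.3):
    """Assumed curriculum: linearly ramp dropout strength as training progresses."""
    return max_ratio * min(step / max(total_steps, 1), 1.0)
```

In training, one would call `curriculum_drop_ratio(step, total_steps)` each step and pass the result to the loss, so regularization is weak early on and strengthens as repeated epochs raise the risk of overfitting to low-entropy tokens.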
Similar Papers
Few Tokens Matter: Entropy Guided Attacks on Vision-Language Models
CV and Pattern Recognition
Makes AI models safer from bad inputs.
ERGO: Entropy-guided Resetting for Generation Optimization in Multi-turn Language Models
Computation and Language
Fixes AI confusion in long chats.
Know Your Limits: Entropy Estimation Modeling for Compression and Generalization
Computation and Language
Makes computers understand and write language better.