Entropy-Guided Token Dropout: Training Autoregressive Language Models with Limited Domain Data

Published: December 29, 2025 | arXiv ID: 2512.23422v1

By: Jiapeng Wang, Yiwen Hu, Yanzipeng Gao, and more

Potential Business Impact:

Lets large language models keep improving when trained repeatedly on limited domain-specific data, avoiding the usual overfitting-driven decline in capability.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

As access to high-quality, domain-specific data grows increasingly scarce, multi-epoch training has become a practical strategy for adapting large language models (LLMs). However, autoregressive models often suffer from performance degradation under repeated data exposure, where overfitting leads to a marked decline in model capability. Through empirical analysis, we trace this degradation to an imbalance in learning dynamics: predictable, low-entropy tokens are learned quickly and come to dominate optimization, while the model's ability to generalize on high-entropy tokens deteriorates with continued training. To address this, we introduce EntroDrop, an entropy-guided token dropout method that functions as structured data regularization. EntroDrop selectively masks low-entropy tokens during training and employs a curriculum schedule to adjust regularization strength in alignment with training progress. Experiments across model scales from 0.6B to 8B parameters show that EntroDrop consistently outperforms standard regularization baselines and maintains robust performance throughout extended multi-epoch training. These findings underscore the importance of aligning regularization with token-level learning dynamics when training on limited data. Our approach offers a promising pathway toward more effective adaptation of LLMs in data-constrained domains.
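To make the idea concrete, here is a minimal sketch of an entropy-guided token dropout loss in the spirit of EntroDrop, assuming a standard causal language modeling setup. The function names, the quantile-based masking rule, and the linear curriculum schedule are illustrative assumptions for this sketch, not the authors' released implementation: it computes per-token predictive entropy from the model's logits, excludes the lowest-entropy (most predictable) tokens from the cross-entropy loss, and ramps the drop fraction with training progress.

```python
# Hypothetical sketch of entropy-guided token dropout for a causal LM training
# step. The masking rule and schedule are illustrative assumptions, not the
# paper's official implementation.
import torch
import torch.nn.functional as F


def entropy_guided_token_dropout_loss(
    logits: torch.Tensor,    # (batch, seq_len, vocab) next-token logits
    labels: torch.Tensor,    # (batch, seq_len) target token ids
    drop_fraction: float,    # fraction of lowest-entropy tokens to exclude
    ignore_index: int = -100,
) -> torch.Tensor:
    """Cross-entropy over tokens, excluding the lowest-entropy (most
    predictable) tokens from the loss, acting as structured data regularization."""
    with torch.no_grad():
        log_probs = F.log_softmax(logits, dim=-1)
        # Predictive entropy per token: H = -sum_v p(v) * log p(v)
        entropy = -(log_probs.exp() * log_probs).sum(dim=-1)  # (batch, seq_len)

    valid = labels != ignore_index
    per_token_loss = F.cross_entropy(
        logits.reshape(-1, logits.size(-1)),
        labels.reshape(-1),
        ignore_index=ignore_index,
        reduction="none",
    ).reshape(labels.shape)

    # Never select padding positions: give them infinite entropy, then drop the
    # k valid tokens with the lowest entropy from the loss.
    masked_entropy = entropy.masked_fill(~valid, float("inf"))
    keep = valid.clone()
    k = int(drop_fraction * valid.sum().item())
    if k > 0:
        drop_idx = torch.topk(masked_entropy.flatten(), k, largest=False).indices
        keep.view(-1)[drop_idx] = False

    return (per_token_loss * keep).sum() / keep.sum().clamp(min=1)


def curriculum_drop_fraction(step: int, total_steps: int, max_fraction: float = 0.3) -> float:
    """Illustrative curriculum: linearly increase regularization strength with
    training progress, up to max_fraction."""
    return max_fraction * min(step / max(total_steps, 1), 1.0)
```

In a training loop, `drop_fraction` would be obtained from `curriculum_drop_fraction(step, total_steps)` at each step and the returned loss backpropagated as usual; the specific schedule shape and maximum drop fraction are free parameters of this sketch.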

Country of Origin
🇨🇳 China

Page Count
14 pages

Category
Computer Science:
Computation and Language