Domain-Adaptive Continued Pre-Training of Small Language Models
By: Salman Faroz
Potential Business Impact:
Makes small AI smarter with less computing power.
Continued pre-training of small language models offers a promising path to domain adaptation with limited computational resources. I investigated this approach in the educational domain, evaluating it as a resource-efficient alternative to training models from scratch. Using a 125M-parameter model, I demonstrate significant performance improvements through incremental training on 400 million tokens, followed by further training to reach 1 billion tokens. My approach includes comprehensive data preprocessing, memory-optimized training configurations, and benchmark-based evaluation. Results show notable gains on knowledge-intensive tasks (MMLU +8.1%) and contextual understanding (HellaSwag +7.6%), while revealing trade-offs introduced by educational-domain specialization. I analyze token efficiency, strategies for mitigating catastrophic forgetting, and scaling patterns. My findings suggest that careful preprocessing and training methodology enable meaningful improvements in language model capabilities even under constrained computational resources, opening pathways for domain-specific adaptation of smaller language models.
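To make the setup concrete, here is a minimal sketch of continued pre-training for a ~125M-parameter causal language model on a domain corpus using the Hugging Face Trainer. The base checkpoint (facebook/opt-125m), the corpus path, and every hyperparameter shown are illustrative assumptions rather than the exact configuration used in this work; the memory-oriented settings (small per-device batch with gradient accumulation, gradient checkpointing, mixed precision) and the low learning rate as a simple guard against catastrophic forgetting reflect common practice, not the specific choices reported above.

```python
# Minimal sketch of domain-adaptive continued pre-training for a ~125M-parameter
# causal language model. Model name, corpus path, and all hyperparameters are
# illustrative assumptions, not the configuration reported in this work.

from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

BASE_MODEL = "facebook/opt-125m"          # stand-in 125M-parameter model
DOMAIN_CORPUS = "educational_corpus.txt"  # placeholder domain text file
BLOCK_SIZE = 1024                         # fixed context length per training example

tokenizer = AutoTokenizer.from_pretrained(BASE_MODEL)
model = AutoModelForCausalLM.from_pretrained(BASE_MODEL)

# Tokenize the raw domain text, then pack it into contiguous BLOCK_SIZE chunks
# so no tokens are wasted on padding.
raw = load_dataset("text", data_files={"train": DOMAIN_CORPUS})["train"]
tokenized = raw.map(lambda b: tokenizer(b["text"]), batched=True, remove_columns=["text"])

def group_texts(examples):
    concatenated = sum(examples["input_ids"], [])
    total = (len(concatenated) // BLOCK_SIZE) * BLOCK_SIZE
    return {"input_ids": [concatenated[i:i + BLOCK_SIZE] for i in range(0, total, BLOCK_SIZE)]}

lm_dataset = tokenized.map(group_texts, batched=True, remove_columns=tokenized.column_names)

# Memory-conscious training arguments: small per-device batch with gradient
# accumulation, gradient checkpointing, and mixed precision. A low learning
# rate is one simple way to limit drift from the base model's general abilities.
args = TrainingArguments(
    output_dir="cpt-125m-educational",
    per_device_train_batch_size=4,
    gradient_accumulation_steps=16,
    gradient_checkpointing=True,
    fp16=True,
    learning_rate=2e-5,
    warmup_ratio=0.01,
    num_train_epochs=1,
    logging_steps=100,
    save_steps=2000,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=lm_dataset,
    # mlm=False yields the standard next-token (causal LM) objective and labels.
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)

trainer.train()
```

A common additional guard against catastrophic forgetting is to mix a fraction of general-domain text back into the continued pre-training stream; that replay step is omitted from the sketch for brevity.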
Similar Papers
The interplay between domain specialization and model size
Computation and Language
Makes AI smarter in specific jobs with less training.
DACP: Domain-Adaptive Continual Pre-Training of Large Language Models for Phone Conversation Summarization
Computation and Language
Makes AI better at summarizing messy conversations.
The Data Efficiency Frontier of Financial Foundation Models: Scaling Laws from Continued Pretraining
Machine Learning (CS)
Teaches computers to understand money talk better.