Score: 1

Domain-Adaptive Continued Pre-Training of Small Language Models

Published: April 13, 2025 | arXiv ID: 2504.09687v1

By: Salman Faroz

Potential Business Impact:

Makes small AI models smarter with less computing power.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

Continued pre-training of small language models offers a promising path for domain adaptation with limited computational resources. I investigate this approach in the educational domain, evaluating it as a resource-efficient alternative to training models from scratch. Using a 125M parameter model, I demonstrate significant performance improvements through incremental training on 400 million tokens, followed by further training to reach 1 billion tokens. My approach includes comprehensive data preprocessing, memory-optimized training configurations, and benchmark-based evaluation. Results show notable gains in knowledge-intensive tasks (MMLU +8.1%) and contextual understanding (HellaSwag +7.6%), while revealing trade-offs of educational-domain specialization. I analyze token efficiency, catastrophic forgetting mitigation strategies, and scaling patterns. My findings suggest that thoughtful preprocessing and training methodologies enable meaningful improvements in language model capabilities even with constrained computational resources, opening pathways for domain-specific adaptation of smaller language models.
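
To make the idea of memory-optimized continued pre-training of a ~125M parameter model concrete, the sketch below wires up the Hugging Face Trainer with gradient checkpointing, mixed precision, and gradient accumulation. It is an illustrative setup only, not the author's actual pipeline: the base checkpoint (facebook/opt-125m), the corpus file name, and all hyperparameters are assumptions for the example.

```python
# A minimal sketch (not the paper's exact pipeline) of memory-efficient
# continued pre-training for a ~125M parameter causal LM, assuming a
# Hugging Face checkpoint and a plain-text domain corpus.
# "educational_corpus.txt" and all hyperparameters are placeholders.
from datasets import load_dataset
from transformers import (
    AutoModelForCausalLM,
    AutoTokenizer,
    DataCollatorForLanguageModeling,
    Trainer,
    TrainingArguments,
)

model_name = "facebook/opt-125m"  # any ~125M causal LM; illustrative choice
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.gradient_checkpointing_enable()  # trade extra compute for lower memory

# Domain corpus; blending in a small slice of general-domain text here is one
# common replay-style way to soften catastrophic forgetting.
raw = load_dataset("text", data_files={"train": "educational_corpus.txt"})

def tokenize(batch):
    return tokenizer(batch["text"], truncation=True, max_length=1024)

tokenized = raw["train"].map(tokenize, batched=True, remove_columns=["text"])

args = TrainingArguments(
    output_dir="cpt-125m-edu",
    per_device_train_batch_size=4,    # small micro-batch to fit limited VRAM
    gradient_accumulation_steps=16,   # effective batch of 64 sequences
    learning_rate=2e-5,               # conservative LR for continued pre-training
    num_train_epochs=1,
    fp16=True,                        # mixed precision halves activation memory
    logging_steps=100,
    save_steps=2000,
)

trainer = Trainer(
    model=model,
    args=args,
    train_dataset=tokenized,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()
```

The gradient-accumulation plus checkpointing combination is what lets a setup like this reach a reasonable effective batch size on a single modest GPU, which is the resource regime the abstract targets.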

Page Count
10 pages

Category
Computer Science:
Computation and Language