SCALE: Upscaled Continual Learning of Large Language Models
By: Jin-woo Lee, Junhwa Choi, Bongkyu Hwang, and more
Potential Business Impact:
Makes AI learn new things without forgetting old ones.
We revisit continual pre-training for large language models and argue that progress now depends more on scaling the right structure than on scaling parameters alone. We introduce SCALE, a width-upscaling architecture that inserts lightweight expansion into linear modules while freezing all pre-trained parameters. This preserves the residual and attention topologies and increases capacity without perturbing the base model's original functionality. SCALE is guided by two principles: Persistent Preservation, which maintains the base model's behavior via preservation-oriented initialization and freezing of the pre-trained weights, and Collaborative Adaptation, which selectively trains a subset of expansion components to acquire new knowledge with minimal interference. We instantiate these ideas as SCALE-Preserve (preservation-first), SCALE-Adapt (adaptation-first), and SCALE-Route, an optional routing extension that performs token-level routing between preservation and adaptation heads. On a controlled synthetic biography benchmark, SCALE mitigates the severe forgetting observed with depth expansion while still acquiring new knowledge. In continual pre-training on a Korean corpus, SCALE variants achieve less forgetting on English evaluations and competitive gains on Korean benchmarks, offering the best overall stability-plasticity trade-off. Accompanying analysis clarifies when preservation provably holds and why the interplay between preservation and adaptation stabilizes optimization compared to standard continual learning setups.
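To make the core idea concrete, here is a minimal sketch of inserting a lightweight, trainable expansion into a frozen pre-trained linear module with preservation-oriented initialization. The additive two-matrix form and all names (ExpandedLinear, expand_dim) are illustrative assumptions for exposition, not the paper's exact SCALE parameterization or routing logic.

```python
# Sketch: frozen base linear module plus a trainable expansion path whose
# output projection is zero-initialized, so the wrapped module reproduces the
# base model's function exactly at initialization (preservation), while the
# expansion weights are free to adapt during continual pre-training.
import torch
import torch.nn as nn


class ExpandedLinear(nn.Module):
    def __init__(self, base: nn.Linear, expand_dim: int = 64):
        super().__init__()
        self.base = base
        # Persistent Preservation: freeze every pre-trained parameter.
        for p in self.base.parameters():
            p.requires_grad = False
        # Lightweight expansion path (trainable).
        self.expand_in = nn.Linear(base.in_features, expand_dim, bias=False)
        self.expand_out = nn.Linear(expand_dim, base.out_features, bias=False)
        # Preservation-oriented initialization: zero the output projection so
        # the expansion contributes nothing before training begins.
        nn.init.zeros_(self.expand_out.weight)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Frozen base output plus the (initially zero) expansion term; the
        # residual and attention wiring around this module is left untouched.
        return self.base(x) + self.expand_out(self.expand_in(x))


# Usage: wrap a pre-trained projection and train only the expansion weights.
base = nn.Linear(1024, 1024)
layer = ExpandedLinear(base, expand_dim=64)
trainable = [p for p in layer.parameters() if p.requires_grad]
print(sum(p.numel() for p in trainable))  # only the expansion parameters
```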
Similar Papers
Grow Up and Merge: Scaling Strategies for Efficient Language Adaptation
Computation and Language
Makes computers understand many languages better.
Curriculum-Guided Layer Scaling for Language Model Pretraining
Computation and Language
Teaches computers to learn faster by growing them.
Revisiting Replay and Gradient Alignment for Continual Pre-Training of Large Language Models
Machine Learning (CS)
Keeps AI smart without forgetting old lessons.