SuRe: Surprise-Driven Prioritised Replay for Continual LLM Learning
By: Hugo Hazard, Zafeirios Fountas, Martin A. Benfeghoul, and more
Potential Business Impact:
Teaches AI to learn new things without forgetting old ones.
Continual learning, the ability to adapt to a sequence of tasks without forgetting previously acquired knowledge, remains a major challenge in machine learning and a key gap between artificial and human intelligence. While regularisation and replay perform well in vision, they lag behind multi-task learning for large language models (LLMs), especially at scale with many tasks. We revisit replay and argue that two failure modes drive this gap: selection (what to rehearse) and integration (how to consolidate new knowledge). To address selection, we propose Surprise-prioritised Replay (SuRe), a simple, architecture-agnostic rule that ranks and stores the most surprising (highest negative log-likelihood) sequences. SuRe achieves state-of-the-art performance in the Large Number of Tasks (LNT) setting and delivers the best overall average across both Standard CL and LNT benchmarks. To address integration, we add a dual-learner design with fast and slow LoRA adapters merged via an exponential moving average (EMA), enabling rapid adaptation while stabilising long-term knowledge. Combining SuRe with the dual learner yields further gains, including improvements of up to +5 accuracy points on LNT over the prior state of the art. Ablation studies confirm that the proposed method remains robust under reduced replay frequency and small buffer sizes, demonstrating both effectiveness and sample efficiency. Taken together, our results establish replay as a strong baseline for continual LLM fine-tuning and demonstrate that surprise-based selection and slow-weight consolidation are complementary components for mitigating catastrophic forgetting.
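The two mechanisms in the abstract — keeping the highest-NLL ("most surprising") sequences in a bounded replay buffer, and consolidating fast weights into slow weights via an EMA — can be sketched as follows. This is a minimal illustration, not the paper's implementation: the class and function names (`SurpriseReplayBuffer`, `ema_merge`), the heap-based eviction policy, and the decay value are all assumptions for exposition.

```python
import heapq
import random

class SurpriseReplayBuffer:
    """Keep the `capacity` most surprising (highest-NLL) sequences.

    Illustrative sketch: a min-heap on NLL lets us evict the least
    surprising item in O(log n) whenever a more surprising one arrives.
    """
    def __init__(self, capacity):
        self.capacity = capacity
        self._heap = []     # min-heap of (nll, counter, sequence)
        self._counter = 0   # tie-breaker so sequences are never compared

    def add(self, sequence, nll):
        item = (nll, self._counter, sequence)
        self._counter += 1
        if len(self._heap) < self.capacity:
            heapq.heappush(self._heap, item)
        elif nll > self._heap[0][0]:
            # New sequence is more surprising than the buffer's minimum:
            # replace the least surprising stored sequence.
            heapq.heapreplace(self._heap, item)

    def sample(self, k, rng=random):
        """Draw up to k stored sequences uniformly for rehearsal."""
        items = [seq for _, _, seq in self._heap]
        return rng.sample(items, min(k, len(items)))

def ema_merge(slow_weights, fast_weights, decay=0.99):
    """Slow-weight consolidation: slow <- decay*slow + (1-decay)*fast.

    In the dual-learner design this would be applied to the fast and
    slow LoRA adapter parameters after each adaptation step; here the
    weights are plain name->float dicts for simplicity.
    """
    return {name: decay * slow_weights[name]
                  + (1 - decay) * fast_weights[name]
            for name in slow_weights}
```

In a training loop, each incoming sequence's NLL under the current model would be passed to `add`, rehearsal batches drawn with `sample`, and `ema_merge` applied periodically so the slow learner drifts only gradually toward the fast learner.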
Similar Papers
GeRe: Towards Efficient Anti-Forgetting in Continual Learning of LLM via General Samples Replay
Computation and Language
Keeps AI smart when learning new things.
Teaching AI to Remember: Insights from Brain-Inspired Replay in Continual Learning
Machine Learning (CS)
Keeps computers remembering old lessons while learning new ones.
Dual-LoRA and Quality-Enhanced Pseudo Replay for Multimodal Continual Food Learning
Machine Learning (CS)
Teaches computers to learn new food facts without forgetting.