Dynamic Feedback Engines: Layer-Wise Control for Self-Regulating Continual Learning

Published: December 25, 2025 | arXiv ID: 2512.21743v1

By: Hengyi Wu, Zhenyi Wang, Heng Huang

Potential Business Impact:

Teaches computers to learn new things without forgetting old ones.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Continual learning aims to acquire new tasks while preserving performance on previously learned ones, but most methods struggle with catastrophic forgetting. Existing approaches typically treat all layers uniformly, often trading stability for plasticity or vice versa. However, different layers naturally exhibit varying levels of uncertainty (entropy) when classifying tasks. High-entropy layers tend to underfit by failing to capture task-specific patterns, while low-entropy layers risk overfitting by becoming overly confident and specialized. To address this imbalance, we propose an entropy-aware continual learning method that employs a dynamic feedback mechanism to regulate each layer based on its entropy. Specifically, our approach reduces entropy in high-entropy layers to mitigate underfitting and increases entropy in overly confident layers to alleviate overfitting. This adaptive regulation encourages the model to converge to wider local minima, which have been shown to improve generalization. Our method is general and can be seamlessly integrated with both replay- and regularization-based approaches. Experiments on various datasets demonstrate substantial performance gains over state-of-the-art continual learning baselines.
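The core idea described in the abstract, regulating each layer toward a moderate entropy level, can be sketched as a per-layer penalty that pulls predictive entropy toward a target: high-entropy (underfitting) layers are pushed to lower entropy, and overconfident (low-entropy) layers are pushed to higher entropy. The sketch below is a minimal illustration, not the authors' implementation; the function names, the quadratic penalty form, and the `target_entropy` and `strength` parameters are assumptions.

```python
import numpy as np

def softmax(logits):
    # Numerically stable softmax over the last axis.
    e = np.exp(logits - logits.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def predictive_entropy(probs):
    # Mean Shannon entropy of the predictive distributions (nats).
    return float(-(probs * np.log(probs + 1e-12)).sum(axis=-1).mean())

def entropy_feedback_penalty(layer_logits, target_entropy, strength=0.1):
    """Hypothetical entropy-feedback regularizer.

    For each layer's (intermediate) classification logits, add a quadratic
    penalty on the gap between that layer's predictive entropy and a target,
    so gradient descent reduces entropy where it is too high (underfitting)
    and raises it where the layer is overconfident (overfitting).
    """
    penalty = 0.0
    for logits in layer_logits:
        h = predictive_entropy(softmax(logits))
        penalty += strength * (h - target_entropy) ** 2
    return penalty

# Toy usage: two "layers" with 4-class logits for a batch of 8 examples.
rng = np.random.default_rng(0)
layers = [rng.normal(size=(8, 4)), 5.0 * rng.normal(size=(8, 4))]
loss = entropy_feedback_penalty(layers, target_entropy=0.5 * np.log(4))
```

In a real training loop this penalty would be computed with differentiable tensors (e.g. PyTorch) and added to the task loss, so the feedback acts on every layer each step; here NumPy is used only to show the arithmetic.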

Country of Origin
🇺🇸 United States

Page Count
14 pages

Category
Computer Science:
Machine Learning (CS)