Dynamic Feedback Engines: Layer-Wise Control for Self-Regulating Continual Learning
By: Hengyi Wu, Zhenyi Wang, Heng Huang
Potential Business Impact:
Teaches computers to learn new things without forgetting old ones.
Continual learning aims to learn new tasks while preserving performance on previously learned ones, but most methods struggle with catastrophic forgetting. Existing approaches typically treat all layers uniformly, trading stability for plasticity or vice versa. However, different layers naturally exhibit varying levels of uncertainty (entropy) when classifying tasks: high-entropy layers tend to underfit by failing to capture task-specific patterns, while low-entropy layers risk overfitting by becoming overly confident and specialized. To address this imbalance, we propose an entropy-aware continual learning method that employs a dynamic feedback mechanism to regulate each layer based on its entropy. Specifically, our approach reduces entropy in high-entropy layers to mitigate underfitting and increases entropy in overly confident layers to alleviate overfitting. This adaptive regulation encourages the model to converge to wider local minima, which have been shown to improve generalization. Our method is general and can be seamlessly integrated with both replay- and regularization-based approaches. Experiments on various datasets demonstrate substantial performance gains over state-of-the-art continual learning baselines.
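The abstract describes the feedback mechanism only at a high level. As a minimal sketch of one plausible reading, the snippet below assumes each layer exposes an auxiliary classifier head producing per-layer logits, and penalizes each layer's deviation from a target entropy, so overly uncertain layers are pushed toward lower entropy and overconfident layers toward higher entropy. The names `layer_entropy`, `entropy_feedback_loss`, `target_entropy`, and `strength` are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn.functional as F

def layer_entropy(logits: torch.Tensor) -> torch.Tensor:
    """Mean Shannon entropy of the softmax distribution for one layer's logits."""
    probs = F.softmax(logits, dim=-1)
    log_probs = F.log_softmax(logits, dim=-1)
    return -(probs * log_probs).sum(dim=-1).mean()

def entropy_feedback_loss(layer_logits: list[torch.Tensor],
                          target_entropy: float,
                          strength: float = 0.1) -> torch.Tensor:
    """Quadratic penalty on each layer's entropy gap to a target (illustrative).

    Layers above the target are regularized toward lower entropy (against
    underfitting); layers below it toward higher entropy (against
    overconfidence), giving the bidirectional feedback the paper describes.
    """
    penalty = sum((layer_entropy(logits) - target_entropy) ** 2
                  for logits in layer_logits)
    return strength * penalty
```

In training, such a term would simply be added to the usual objective, e.g. `loss = task_loss + entropy_feedback_loss(aux_logits, target_entropy=1.0)`, alongside whatever replay or regularization scheme the base method already uses.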
Similar Papers
Dynamic Continual Learning: Harnessing Parameter Uncertainty for Improved Network Adaptation
Machine Learning (CS)
Keeps computers remembering old lessons while learning new ones.
Dynamic Nested Hierarchies: Pioneering Self-Evolution in Machine Learning Architectures for Lifelong Intelligence
Machine Learning (CS)
AI learns and changes like a brain.
Dynamic Mixture of Experts Against Severe Distribution Shifts
Machine Learning (CS)
Lets computers learn new things without forgetting old ones.