Hybrid Learners Do Not Forget: A Brain-Inspired Neuro-Symbolic Approach to Continual Learning
By: Amin Banayeeanzade, Mohammad Rostami
Potential Business Impact:
AI learns new things without forgetting old ones.
Continual learning is crucial for creating AI agents that can learn and improve themselves autonomously. A primary challenge in continual learning is to learn new tasks without losing previously learned knowledge. Current continual learning methods primarily focus on equipping a neural network with mechanisms that mitigate forgetting effects. Inspired by the two distinct systems in the human brain, System 1 and System 2, we propose a Neuro-Symbolic Brain-Inspired Continual Learning (NeSyBiCL) framework that incorporates two subsystems to address continual learning: a neural network model responsible for quickly adapting to the most recent task, together with a symbolic reasoner responsible for retaining knowledge acquired from previous tasks. Moreover, we design an integration mechanism between these components to facilitate knowledge transfer from the symbolic reasoner to the neural network. We also introduce two compositional continual learning benchmarks and demonstrate that NeSyBiCL is effective and leads to superior performance compared to continual learning methods that rely solely on neural architectures to address forgetting.
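The abstract does not include implementation details, but the two-subsystem idea can be illustrated with a minimal sketch (the class names, prototype-rule memory, and replay-style integration below are illustrative assumptions, not the paper's actual method): a fast-adapting neural classifier learns the current task, a symbolic store retains past tasks as simple per-class rules, and an integration step replays the symbolic knowledge into the network's training so earlier tasks are not forgotten.

```python
# Hypothetical sketch of a two-subsystem continual learner in the spirit of NeSyBiCL.
# All names and mechanisms here are assumptions for illustration, not the paper's API.
import numpy as np


class SymbolicMemory:
    """Retains past-task knowledge as simple per-class prototype rules."""

    def __init__(self):
        self.rules = {}  # class label -> prototype feature vector ("rule" per class)

    def consolidate(self, X, y):
        # Absorb a finished task by storing one prototype rule per class.
        for c in np.unique(y):
            self.rules[int(c)] = X[y == c].mean(axis=0)

    def replay(self):
        # Yield (input, label) pairs reconstructed from the stored rules.
        return [(proto, c) for c, proto in self.rules.items()]


class NeuralLearner:
    """Fast-adapting softmax classifier for the most recent task."""

    def __init__(self, dim, n_classes, lr=0.05):
        self.W = np.zeros((n_classes, dim))
        self.lr = lr

    def predict(self, x):
        z = self.W @ x
        p = np.exp(z - z.max())
        return p / p.sum()

    def _step(self, x, c):
        # One SGD step on cross-entropy: gradient wrt logits is p - one_hot(c).
        p = self.predict(x)
        p[c] -= 1.0
        self.W -= self.lr * np.outer(p, x)

    def fit_task(self, X, y, memory, epochs=30):
        for _ in range(epochs):
            for x, c in zip(X, y):          # quickly adapt to the new task
                self._step(x, int(c))
            for x, c in memory.replay():    # integration: transfer old knowledge
                self._step(x, int(c))       # from the symbolic side to the network


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    memory, net = SymbolicMemory(), NeuralLearner(dim=2, n_classes=4)
    centers = np.array([[0, 4], [4, 0], [-4, 0], [0, -4]], dtype=float)
    for task_classes in [(0, 1), (2, 3)]:               # two sequential toy tasks
        y = rng.choice(task_classes, size=200)
        X = centers[y] + rng.normal(scale=0.5, size=(200, 2))
        net.fit_task(X, y, memory)                      # neural side adapts first
        memory.consolidate(X, y)                        # symbolic side then absorbs the task
    # Probe retention: classify a sample from the first task after learning the second.
    test = centers[0] + rng.normal(scale=0.5, size=2)
    print("predicted class for a task-1 sample:", int(np.argmax(net.predict(test))))
```

In this toy version the "symbolic reasoner" is reduced to nearest-prototype rules and the integration mechanism to replaying those rules during training; the paper's actual reasoner and transfer mechanism may differ substantially.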
Similar Papers
NeSyC: A Neuro-symbolic Continual Learner For Complex Embodied Tasks In Open Domains
Artificial Intelligence
Teaches robots to learn and do new things.
Semi-parametric Memory Consolidation: Towards Brain-like Deep Continual Learning
Machine Learning (CS)
Computers remember old lessons while learning new ones.
Personalized Artificial General Intelligence (AGI) via Neuroscience-Inspired Continuous Learning Systems
Artificial Intelligence
Makes AI learn like a brain on your phone.