Memory-Augmented Transformers: A Systematic Review from Neuroscience Principles to Technical Solutions
By: Parsa Omidi, Xingshuai Huang, Axel Laborieux, and more
Potential Business Impact:
Computers remember more, learn longer, and think better.
Memory is fundamental to intelligence, enabling learning, reasoning, and adaptability across biological and artificial systems. While Transformer architectures excel at sequence modeling, they face critical limitations in long-range context retention, continual learning, and knowledge integration. This review presents a unified framework bridging neuroscience principles, including dynamic multi-timescale memory, selective attention, and consolidation, with engineering advances in Memory-Augmented Transformers. We organize recent progress through three taxonomic dimensions: functional objectives (context extension, reasoning, knowledge integration, adaptation), memory representations (parameter-encoded, state-based, explicit, hybrid), and integration mechanisms (attention fusion, gated control, associative retrieval). Our analysis of core memory operations (reading, writing, forgetting, and capacity management) reveals a shift from static caches toward adaptive, test-time learning systems. We identify persistent challenges in scalability and interference, alongside emerging solutions including hierarchical buffering and surprise-gated updates. This synthesis provides a roadmap toward cognitively inspired, lifelong-learning Transformer architectures.
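To make the four memory operations named in the abstract concrete, here is a minimal sketch, assuming a PyTorch setting: an explicit slot-based memory with attention-style reading, surprise-gated writing, decay-based forgetting, and fixed-capacity eviction. The class name, thresholds, and eviction rule are hypothetical illustrations, not the method of any specific paper covered by the review.

```
import torch
import torch.nn.functional as F


class SurpriseGatedMemory(torch.nn.Module):
    """Toy explicit-memory module illustrating the four operations the
    review organizes: reading (attention over slots), writing (gated by
    a surprise signal), forgetting (exponential decay of slot salience),
    and capacity management (fixed slot count, least-salient eviction).
    All names, shapes, and thresholds are illustrative assumptions."""

    def __init__(self, num_slots: int = 64, dim: int = 128,
                 decay: float = 0.99, surprise_threshold: float = 1.0):
        super().__init__()
        self.register_buffer("keys", torch.zeros(num_slots, dim))
        self.register_buffer("values", torch.zeros(num_slots, dim))
        self.register_buffer("salience", torch.zeros(num_slots))
        self.decay = decay
        self.surprise_threshold = surprise_threshold

    def _attend(self, query: torch.Tensor) -> torch.Tensor:
        # Scaled dot-product attention of queries against memory keys.
        return F.softmax(
            query @ self.keys.T / self.keys.shape[-1] ** 0.5, dim=-1)

    def read(self, query: torch.Tensor) -> torch.Tensor:
        # Reading: attention-weighted sum of stored values.
        scores = self._attend(query)
        # Forgetting: salience of rarely attended slots decays each read.
        self.salience = self.decay * self.salience + scores.mean(dim=0)
        return scores @ self.values

    def write(self, new_keys: torch.Tensor, new_values: torch.Tensor) -> None:
        # Surprise: how poorly the current memory reconstructs the value.
        predicted = self._attend(new_keys) @ self.values
        surprise = (predicted - new_values).norm(dim=-1)
        for i in torch.nonzero(surprise > self.surprise_threshold).flatten():
            # Capacity management: overwrite the least salient slot.
            slot = int(self.salience.argmin())
            self.keys[slot] = new_keys[i]
            self.values[slot] = new_values[i]
            self.salience[slot] = 1.0


if __name__ == "__main__":
    mem = SurpriseGatedMemory()
    tokens = torch.randn(8, 128)     # e.g. hidden states from a Transformer layer
    mem.write(tokens, tokens)        # only "surprising" items are stored
    context = mem.read(torch.randn(2, 128))
    print(context.shape)             # torch.Size([2, 128])
```

Gating writes on a surprise signal, as sketched above, is one way to keep a fixed-capacity memory from being flooded by redundant tokens; real systems surveyed in the review differ in how surprise, salience, and eviction are defined.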
Similar Papers
Memory-Augmented Transformers: A Systematic Review from Neuroscience Principles to Enhanced Model Architectures
Machine Learning (CS)
Helps computers remember more, like humans.
It's All Connected: A Journey Through Test-Time Memorization, Attentional Bias, Retention, and Online Optimization
Machine Learning (CS)
Makes AI remember more and learn faster.
Memo: Training Memory-Efficient Embodied Agents with Reinforcement Learning
Artificial Intelligence
Helps robots remember and learn from past experiences.