Capacity-Constrained Continual Learning
By: Zheng Wen, Doina Precup, Benjamin Van Roy, and more
Potential Business Impact:
Teaches computers to learn better with less memory.
Any agent we can possibly build is subject to capacity constraints, since memory and compute resources are inherently finite. However, comparatively little attention has been devoted to understanding how agents with limited capacity should allocate those resources for optimal performance. The goal of this paper is to shed light on this question by studying a simple yet relevant continual learning problem: the capacity-constrained linear-quadratic-Gaussian (LQG) sequential prediction problem. We derive a solution to this problem under appropriate technical conditions. For problems that can be decomposed into a set of sub-problems, we further demonstrate how to optimally allocate capacity across these sub-problems in the steady state. We view these results as a first step in the systematic theoretical study of learning under capacity constraints.
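To make the problem setting concrete, below is a minimal numerical sketch of what a capacity constraint can mean in linear-Gaussian (LQG-style) sequential prediction: a full-memory agent runs a standard Kalman filter over the true n-dimensional state, while a capacity-constrained agent may only store a k-dimensional state and therefore filters a projected model. The specific dynamics (A, C, W, V), the fixed random projection basis U, and the noise-inflation term are all illustrative assumptions; this is not the paper's derivation or its optimal capacity-allocation scheme.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- True linear-Gaussian (LQG-style) process the agent must predict ---
# x_{t+1} = A x_t + w_t,   y_t = C x_t + v_t
n, m = 8, 1              # true latent dimension, observation dimension
A = 0.95 * np.linalg.qr(rng.standard_normal((n, n)))[0]   # stable dynamics
C = rng.standard_normal((m, n))
W, V = 0.05 * np.eye(n), 0.1 * np.eye(m)                  # noise covariances


def kalman(A_, C_, W_, V_):
    """Return a one-step predict-then-update Kalman step for the given model."""
    d = A_.shape[0]

    def step(x_hat, P, y):
        x_pred = A_ @ x_hat                       # predict next state
        P_pred = A_ @ P @ A_.T + W_
        S = C_ @ P_pred @ C_.T + V_               # innovation covariance
        K = P_pred @ C_.T @ np.linalg.inv(S)      # Kalman gain
        x_new = x_pred + K @ (y - C_ @ x_pred)    # correct with observation
        P_new = (np.eye(d) - K @ C_) @ P_pred
        return x_new, P_new

    return step


# --- Capacity-constrained agent (illustrative): it may only remember a ---
# --- k-dimensional state, so it filters a projected k-dim model instead ---
k = 3                                             # memory budget, k < n
U = np.linalg.qr(rng.standard_normal((n, k)))[0]  # fixed compression basis (assumption)
A_k, C_k = U.T @ A @ U, C @ U                     # projected dynamics / observation
W_k = U.T @ W @ U + 0.05 * np.eye(k)              # inflate noise for model mismatch

full_step = kalman(A, C, W, V)
small_step = kalman(A_k, C_k, W_k, V)

# --- Roll out both agents and compare one-step prediction error ---
x = rng.standard_normal(n)
x_full, P_full = np.zeros(n), np.eye(n)
x_small, P_small = np.zeros(k), np.eye(k)
err_full = err_small = 0.0
T = 5000
for _ in range(T):
    y = C @ x + rng.multivariate_normal(np.zeros(m), V)
    err_full += float(np.sum((y - C @ (A @ x_full)) ** 2))
    err_small += float(np.sum((y - C_k @ (A_k @ x_small)) ** 2))
    x_full, P_full = full_step(x_full, P_full, y)
    x_small, P_small = small_step(x_small, P_small, y)
    x = A @ x + rng.multivariate_normal(np.zeros(n), W)

print(f"avg squared prediction error, full n-dim filter : {err_full / T:.4f}")
print(f"avg squared prediction error, k-dim memory agent: {err_small / T:.4f}")
```

In this toy setup the k-dimensional agent typically incurs higher prediction error, illustrating the performance cost of limited memory that the paper studies; the paper's contribution is characterizing how such capacity should be allocated optimally, which this random-projection sketch does not attempt.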
Similar Papers
On the Theory of Continual Learning with Gradient Descent for Neural Networks
Machine Learning (Stat)
Helps AI remember old lessons while learning new ones.
Continual Reinforcement Learning for Cyber-Physical Systems: Lessons Learned and Open Challenges
Machine Learning (CS)
Teaches self-driving cars to learn new parking spots.
Limits To (Machine) Learning
Machine Learning (Stat)
Finds hidden money patterns machines miss.