Optimal Rates in Continual Linear Regression via Increasing Regularization
By: Ran Levinstein, Amit Attia, Matan Schliserman, and more
Potential Business Impact:
Teaches computers to learn new things without forgetting.
We study realizable continual linear regression under random task orderings, a common setting for developing continual learning theory. In this setup, the worst-case expected loss after $k$ learning iterations admits a lower bound of $\Omega(1/k)$. However, prior work using an unregularized scheme has only established an upper bound of $O(1/k^{1/4})$, leaving a significant gap. Our paper proves that this gap can be narrowed, or even closed, using two frequently used regularization schemes: (1) explicit isotropic $\ell_2$ regularization, and (2) implicit regularization via finite step budgets. We show that these approaches, which are used in practice to mitigate forgetting, reduce to stochastic gradient descent (SGD) on carefully defined surrogate losses. Through this lens, we identify a fixed regularization strength that yields a near-optimal rate of $O(\log k / k)$. Moreover, formalizing and analyzing a generalized variant of SGD for time-varying functions, we derive an increasing regularization strength schedule that provably achieves an optimal rate of $O(1/k)$. This suggests that schedules that increase the regularization coefficient or decrease the number of steps per task are beneficial, at least in the worst case.
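To make the explicit-regularization scheme concrete, below is a minimal numpy sketch of continual linear regression in which each task is fit with isotropic $\ell_2$ regularization toward the previous iterate, and the regularization strength increases with the task index. The particular schedule (`lam = lambda0 * t`), the closed-form ridge-style per-task update, and the task-sampling model are illustrative assumptions for this sketch, not the paper's exact construction or analysis.

```python
# Minimal sketch (assumptions noted above, not the paper's exact algorithm):
# continual linear regression where each task t solves
#   min_w  ||X_t w - y_t||^2 / (2 n)  +  (lambda_t / 2) ||w - w_{t-1}||^2
# with an increasing schedule lambda_t = lambda0 * t.
import numpy as np

rng = np.random.default_rng(0)
d, k, n_per_task = 20, 200, 5          # dimension, number of tasks, samples per task
w_star = rng.normal(size=d)            # shared realizable solution across all tasks

w = np.zeros(d)                        # continual learner's iterate
lambda0 = 1.0                          # base regularization strength (assumed)

dists = []
for t in range(1, k + 1):
    # Draw a random task: a few linear measurements consistent with w_star (realizable).
    X = rng.normal(size=(n_per_task, d))
    y = X @ w_star

    # Ridge-style closed-form solution of the regularized task objective,
    # pulling the new iterate toward the previous one with strength lambda_t.
    lam = lambda0 * t
    A = X.T @ X / n_per_task + lam * np.eye(d)
    b = X.T @ y / n_per_task + lam * w
    w = np.linalg.solve(A, b)

    # Track squared distance to the realizable solution as a proxy for the loss.
    dists.append(np.linalg.norm(w - w_star) ** 2)

print(f"squared distance to w_star after {k} tasks: {dists[-1]:.3e}")
```

Increasing `lam` with `t` mirrors the paper's qualitative recommendation that the regularization coefficient grow (or, equivalently in the implicit scheme, that the per-task step budget shrink) as more tasks are seen, so later tasks perturb the accumulated solution less.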
Similar Papers
From Continual Learning to SGD and Back: Better Rates for Continual Linear Models
Machine Learning (CS)
Prevents AI from forgetting old lessons when learning new ones.
Large Stepsizes Accelerate Gradient Descent for Regularized Logistic Regression
Machine Learning (Stat)
Faster computer learning with big steps.
Memory-Statistics Tradeoff in Continual Learning with Structural Regularization
Machine Learning (CS)
Helps computers learn new things without forgetting old ones.