Model-free Online Learning for the Kalman Filter: Forgetting Factor and Logarithmic Regret
By: Jiachen Qian, Yang Zheng
Potential Business Impact:
Makes predictions better for changing systems.
We consider the problem of online prediction for an unknown, non-explosive linear stochastic system. With a known system model, the optimal predictor is the celebrated Kalman filter. For unknown systems, existing approaches based on recursive least squares and its variants may suffer from degraded performance due to the highly imbalanced nature of the regression model; this imbalance can easily lead to overfitting and thus degrade prediction accuracy. We tackle this problem by injecting an inductive bias into the regression model via exponential forgetting. While exponential forgetting is common wisdom in online learning, it is typically used for re-weighting data. In contrast, our approach uses it to balance the regression model, which achieves a better trade-off between regression and regularization errors and simultaneously reduces the accumulation error. With new proof techniques, we also provide a sharper logarithmic regret bound of $O(\log^3 N)$, where $N$ is the number of observations.
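To make the role of the forgetting factor concrete, below is a minimal sketch of recursive least squares (RLS) with exponential forgetting for one-step online prediction. This is a generic textbook RLS recursion, not the authors' exact algorithm or regret-optimal tuning; the forgetting factor `lam`, the initialization `delta`, and the toy AR(2) system are illustrative assumptions.

```python
import numpy as np

def rls_forgetting(X, y, lam=0.98, delta=1e3):
    """Recursive least squares with exponential forgetting factor `lam` (0 < lam <= 1).

    X: (N, d) regressors; y: (N,) targets. Returns one-step-ahead
    predictions and the final parameter estimate. P is initialized as
    delta * I (large delta => weak prior on the parameters).
    """
    N, d = X.shape
    theta = np.zeros(d)           # online parameter estimate
    P = delta * np.eye(d)         # inverse of the exponentially weighted Gram matrix
    preds = np.zeros(N)
    for t in range(N):
        x = X[t]
        preds[t] = x @ theta      # predict before observing y[t]
        Px = P @ x
        k = Px / (lam + x @ Px)   # gain vector
        theta = theta + k * (y[t] - x @ theta)
        P = (P - np.outer(k, Px)) / lam   # forgetting: old data decays by 1/lam
    return preds, theta

# Toy example (assumed for illustration): predict a stable AR(2) sequence
# from its past two values; the true coefficients are [1.2, -0.5].
rng = np.random.default_rng(0)
N, p = 2000, 2
a = np.array([1.2, -0.5])
ys = np.zeros(N + p)
for t in range(p, N + p):
    ys[t] = a @ ys[t - p:t][::-1] + 0.1 * rng.standard_normal()
X = np.stack([ys[t - p:t][::-1] for t in range(p, N + p)])
y = ys[p:]
preds, theta = rls_forgetting(X, y, lam=0.995)
print(np.round(theta, 2))  # estimate should approach [1.2, -0.5]
```

Setting `lam < 1` discounts old observations geometrically, which is the standard re-weighting use of forgetting that the abstract contrasts with; the paper's contribution is instead to use forgetting to balance the regression model itself.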
Similar Papers
Online Learning of Nonlinear Parametric Models under Non-smooth Regularization using EKF and ADMM
Systems and Control
Teaches computers to learn from new data fast.
Non-stationary Online Learning for Curved Losses: Improved Dynamic Regret via Mixability
Machine Learning (CS)
Makes computer learning better with changing data.
Online Linear Regression with Paid Stochastic Features
Machine Learning (CS)
Learns better by choosing how much to pay for cleaner data.