Adapting to Online Distribution Shifts in Deep Learning: A Black-Box Approach
By: Dheeraj Baby, Boran Han, Shuai Zhang, and more
Potential Business Impact:
Helps computers learn better when information changes.
We study the well-motivated problem of online distribution shift, in which data arrive in batches and the distribution of each batch can change arbitrarily over time. Since the shifts can be large or small, abrupt or gradual, the length of the relevant historical data to learn from may vary over time, which poses a major challenge in designing algorithms that can automatically adapt to the best ``attention span'' while remaining computationally efficient. We propose a meta-algorithm that takes any network architecture and any Online Learner (OL) algorithm as input and produces a new algorithm that provably enhances the performance of the given OL under non-stationarity. Our algorithm is efficient (it requires maintaining only $O(\log(T))$ OL instances) and adaptive (it automatically chooses OL instances with the ideal ``attention'' length at every timestamp). Experiments on various real-world datasets across text and image modalities show that our method consistently improves the accuracy of user-specified OL algorithms on classification tasks. Key novel algorithmic ingredients include a \emph{multi-resolution instance} design inspired by wavelet theory and a cross-validation-through-time technique. Both could be of independent interest.
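To make the idea concrete, here is a minimal toy sketch of the general pattern the abstract describes: maintain $O(\log T)$ learners with dyadic ``attention spans'' and follow the one whose recent predictions have validated best. This is an illustrative simplification, not the authors' algorithm; the `WindowedLearner` class, the squared-error scoring, and the averaging predictor are all assumptions made for this sketch.

```python
import math

class WindowedLearner:
    """Toy online learner: predicts the average label over its last `span` batches."""
    def __init__(self, span):
        self.span = span
        self.history = []  # observed batch means, oldest first

    def predict(self):
        # Average of the last `span` batch means (0.0 before any data arrives).
        recent = self.history[-self.span:]
        return sum(recent) / len(recent) if recent else 0.0

    def update(self, batch_mean):
        self.history.append(batch_mean)

def run_meta(stream, horizon):
    """Follow-the-best over O(log T) learners with dyadic attention spans."""
    # Dyadic spans 1, 2, 4, ..., up to the horizon: only log2(T)+1 instances.
    spans = [2 ** k for k in range(int(math.log2(horizon)) + 1)]
    learners = [WindowedLearner(s) for s in spans]
    losses = [0.0] * len(learners)  # running squared error per learner
    preds = []
    for batch_mean in stream:
        # "Cross-validation through time" in spirit: each learner is scored
        # on new batches before seeing them; the meta-prediction follows the
        # learner with the lowest accumulated loss so far.
        best = min(range(len(learners)), key=lambda i: losses[i])
        preds.append(learners[best].predict())
        for i, learner in enumerate(learners):
            losses[i] += (learner.predict() - batch_mean) ** 2
            learner.update(batch_mean)
    return preds
```

Under an abrupt shift, the short-span learners recover quickly and the meta-algorithm switches to them, while under slow drift the long-span learners win; the selection rule adapts the effective attention length automatically.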
Similar Papers
Adaptive Anomaly Detection in Evolving Network Environments
Cryptography and Security
Keeps computer security systems working even when data changes.
Out-of-Distribution Generalization in Time Series: A Survey
Machine Learning (CS)
Helps computers learn from changing data better.
Online Learning and Unlearning
Machine Learning (CS)
Lets computers forget bad data and learn new things.