Composite Optimization with Error Feedback: the Dual Averaging Approach
By: Yuan Gao, Anton Rodomanov, Jeremy Rack, and more
Potential Business Impact:
Makes computers learn faster by sending less data between machines.
Communication efficiency is a central challenge in distributed machine learning training, and message compression is a widely used solution. However, standard Error Feedback (EF) methods (Seide et al., 2014), though effective for smooth unconstrained optimization with compression (Karimireddy et al., 2019), fail in the broader and practically important setting of composite optimization, which captures, e.g., objectives consisting of a smooth loss combined with a non-smooth regularizer or constraints. The theoretical foundations and behavior of EF in the general composite setting remain largely unexplored. In this work, we consider composite optimization with EF. We point out that the basic EF mechanism and its analysis no longer hold when a composite part is involved, and we argue that this failure stems from a fundamental limitation of both the method and its analysis technique. We propose a novel method that combines Dual Averaging with EControl (Gao et al., 2024), a state-of-the-art variant of the EF mechanism, and achieves, for the first time, strong convergence guarantees for composite optimization with error feedback. Along with our new algorithm, we also provide a new analysis template for inexact dual averaging methods, which may be of independent interest. We also provide experimental results to complement our theoretical findings.
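For intuition, composite optimization refers to problems of the form min_x f(x) + psi(x), where f is a smooth loss and psi is a possibly non-smooth regularizer (e.g., an l1 penalty) or the indicator of a constraint set. The sketch below is a minimal, single-worker illustration of the ingredients the abstract mentions: a contractive compressor (top-k), the classic error-feedback buffer of Seide et al. (2014), and a dual-averaging update with a proximal step handling the composite part. The function names, the l1 choice of psi, and the step-size schedule are illustrative assumptions; this is not the paper's EControl-based algorithm.

```python
import numpy as np

def top_k(v, k):
    """Top-k sparsifier: keep only the k largest-magnitude entries
    (a standard example of a contractive compressor)."""
    out = np.zeros_like(v)
    idx = np.argsort(np.abs(v))[-k:]
    out[idx] = v[idx]
    return out

def soft_threshold(z, t):
    """Proximal operator of t * ||.||_1 (l1 regularizer as the composite part)."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ef_dual_averaging(grad, x0, lam, lr, k, steps):
    """Illustrative single-worker loop: error feedback on compressed
    gradients feeding a dual-averaging (lazy) update with a prox step
    for the l1 term. A didactic sketch only, not the paper's method."""
    x = x0.copy()
    e = np.zeros_like(x0)   # accumulated compression error (the EF buffer)
    z = np.zeros_like(x0)   # running sum of communicated gradients (dual state)
    for t in range(1, steps + 1):
        g = grad(x)
        m = top_k(g + e, k)      # compress the gradient plus carried-over error
        e = (g + e) - m          # store what the compressor dropped
        z += m                   # server-side aggregation of compressed messages
        # dual-averaging step: argmin <z, x> + t*lam*||x||_1 + ||x - x0||^2 / (2*lr)
        x = soft_threshold(x0 - lr * z, lr * t * lam)
    return x

# Example usage on a small l1-regularized least-squares problem:
rng = np.random.default_rng(0)
A, b = rng.standard_normal((20, 10)), rng.standard_normal(20)
grad = lambda x: A.T @ (A @ x - b) / len(b)
x_hat = ef_dual_averaging(grad, np.zeros(10), lam=0.1, lr=0.1, k=3, steps=500)
```

The closed-form prox step works here only because psi is the l1 norm; for a general constraint set it would be a projection instead. The abstract's point is that naive EF (the first three lines of the loop) interacts badly with such composite steps, which is what motivates the EControl-based correction in the paper.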
Similar Papers
Tight analyses of first-order methods with error feedback
Machine Learning (CS)
Makes computers learn faster by talking less.
Improved Convergence in Parameter-Agnostic Error Feedback through Momentum
Optimization and Control
Makes AI learn faster without needing expert settings.
Safe-EF: Error Feedback for Nonsmooth Constrained Optimization
Machine Learning (CS)
Robots learn faster, safer, with less data sent.