Safe-EF: Error Feedback for Nonsmooth Constrained Optimization
By: Rustem Islamov, Yarden As, Ilyas Fatkhullin
Potential Business Impact:
Robots learn faster and more safely, while sending less data.
Federated learning faces severe communication bottlenecks due to the high dimensionality of model updates. Communication compression with contractive compressors (e.g., Top-K) is often preferable in practice but can degrade performance without proper handling. Error feedback (EF) mitigates such issues but has been largely restricted to smooth, unconstrained problems, limiting its real-world applicability where non-smooth objectives and safety constraints are critical. We advance the understanding of EF in the canonical non-smooth convex setting by establishing new lower complexity bounds for first-order algorithms with contractive compression. Next, we propose Safe-EF, a novel algorithm that matches our lower bound (up to a constant) while enforcing safety constraints essential for practical applications. Extending our approach to the stochastic setting, we bridge the gap between theory and practical implementation. Extensive experiments in a reinforcement learning setup, simulating distributed humanoid robot training, validate the effectiveness of Safe-EF in ensuring safety and reducing communication complexity.
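As a concrete illustration of the error-feedback mechanism the abstract refers to, the sketch below shows a generic single-worker EF loop with a Top-K contractive compressor. This is not the paper's Safe-EF algorithm (which additionally handles nonsmooth objectives and safety constraints); the names `grad_fn`, `top_k`, and `ef_sgd` and the single-worker setup are illustrative assumptions.

```python
# Minimal sketch of classic error feedback with a contractive Top-K compressor.
# Not the paper's Safe-EF method; `grad_fn` is a user-supplied (sub)gradient oracle.
import numpy as np

def top_k(v: np.ndarray, k: int) -> np.ndarray:
    """Keep the k largest-magnitude entries of v, zero out the rest (contractive)."""
    out = np.zeros_like(v)
    idx = np.argpartition(np.abs(v), -k)[-k:]
    out[idx] = v[idx]
    return out

def ef_sgd(grad_fn, x0: np.ndarray, lr: float = 0.1, k: int = 10, steps: int = 100) -> np.ndarray:
    """Error-feedback SGD: compress (accumulated error + step), communicate only
    the compressed message, and carry the compression residual to the next round."""
    x = x0.copy()
    e = np.zeros_like(x0)            # accumulated compression error
    for _ in range(steps):
        g = grad_fn(x)               # (sub)gradient at the current iterate
        m = top_k(e + lr * g, k)     # compressed message that would be sent
        e = e + lr * g - m           # residual fed back into the next step
        x = x - m                    # update uses only the compressed message
    return x
```

In a federated setting, each worker would keep its own error accumulator `e` and only the compressed messages `m` would be transmitted to the server, which is the source of the communication savings the abstract emphasizes.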
Similar Papers
Tight analyses of first-order methods with error feedback
Machine Learning (CS)
Makes computers learn faster by talking less.
Composite Optimization with Error Feedback: the Dual Averaging Approach
Optimization and Control
Makes computers learn faster with less data.
Improved Convergence in Parameter-Agnostic Error Feedback through Momentum
Optimization and Control
Makes AI learn faster without needing expert settings.