Learning to accelerate Krasnosel'skii-Mann fixed-point iterations with guarantees
By: Andrea Martin, Giuseppe Belgioioso
Potential Business Impact:
Helps computers solve hard optimization problems much faster.
We introduce a principled learning to optimize (L2O) framework for solving fixed-point problems involving general nonexpansive mappings. Our idea is to deliberately inject summable perturbations into a standard Krasnosel'skii-Mann iteration to improve its average-case performance over a specific distribution of problems while retaining its convergence guarantees. Under a metric sub-regularity assumption, we prove that the proposed parametrization includes only iterations that locally achieve linear convergence, up to a vanishing bias term, and that it encompasses all iterations that do so at a sufficiently fast rate. We then demonstrate how our framework can be used to augment several widely used operator splitting methods to accelerate the solution of structured monotone inclusion problems, and validate our approach on a best approximation problem using an L2O-augmented Douglas-Rachford splitting algorithm.
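To make the scheme concrete, below is a minimal, self-contained sketch of a perturbed Krasnosel'skii-Mann iteration applied to a Douglas-Rachford operator on a toy two-set feasibility instance. The function names (`km_iteration`, `learned_perturbation`), the choice of sets, and the geometric decay factor are illustrative assumptions rather than the paper's implementation; the point it shows is that any bounded perturbation scaled by a summable sequence leaves the classical convergence guarantee intact.

```python
import numpy as np

def proj_ball(x, center, radius):
    """Euclidean projection onto the ball {z : ||z - center|| <= radius}."""
    d = x - center
    n = np.linalg.norm(d)
    return x if n <= radius else center + radius * d / n

def proj_halfspace(x, a, b):
    """Euclidean projection onto the halfspace {z : a @ z <= b}."""
    viol = a @ x - b
    return x if viol <= 0 else x - viol * a / (a @ a)

def km_iteration(T, x0, perturbations=None, alpha=0.5, iters=200):
    """Perturbed Krasnosel'skii-Mann iteration:
    x_{k+1} = x_k + alpha * (T(x_k) - x_k) + eps_k.
    Convergence to a fixed point of the nonexpansive map T is preserved
    whenever the perturbations eps_k are summable.
    """
    x = np.array(x0, dtype=float)
    for k in range(iters):
        eps = perturbations(x, k) if perturbations is not None else 0.0
        x = x + alpha * (T(x) - x) + eps
    return x

# Douglas-Rachford operator for two sets A (ball) and B (halfspace):
# T = 0.5 * (I + R_B R_A), with reflections R_C = 2 P_C - I.
center, radius = np.zeros(2), 1.0
a, b = np.array([1.0, 1.0]), 1.5

def T(x):
    ra = 2 * proj_ball(x, center, radius) - x   # reflection across A
    rb = 2 * proj_halfspace(ra, a, b) - ra      # reflection across B
    return 0.5 * (x + rb)

# Stand-in for a learned update (hypothetical): any bounded direction scaled
# by a summable sequence (here 0.5**k) keeps the worst-case guarantee.
def learned_perturbation(x, k):
    direction = -np.tanh(x)          # placeholder for a trained model's output
    return (0.5 ** k) * direction

z_star = km_iteration(T, x0=np.array([3.0, -2.0]),
                      perturbations=learned_perturbation)
solution = proj_ball(z_star, center, radius)  # shadow point lies in A ∩ B
print(solution)
```

In the paper's framework, the hand-written `learned_perturbation` would be replaced by a parametrized model trained to improve average-case performance over a distribution of problem instances, while the summable scaling is what retains the convergence guarantees of the underlying operator splitting method.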
Similar Papers
Learning Provably Improves the Convergence of Gradient Descent
Machine Learning (CS)
Makes AI solve hard math problems faster.
A Class of Accelerated Fixed-Point-Based Methods with Delayed Inexact Oracles and Its Applications
Optimization and Control
Speeds up fixed-point methods for hard math problems.