Efficient Curvature-Aware Hypergradient Approximation for Bilevel Optimization
By: Youran Dong, Junfeng Yang, Wei Yao, and more
Potential Business Impact:
Speeds up machine-learning methods that tune their own settings.
Bilevel optimization is a powerful tool for many machine learning problems, such as hyperparameter optimization and meta-learning. Estimating hypergradients (also known as implicit gradients) is crucial for developing gradient-based methods for bilevel optimization. In this work, we propose a computationally efficient technique for incorporating curvature information into the approximation of hypergradients and present a novel algorithmic framework based on the resulting enhanced hypergradient computation. We provide convergence rate guarantees for the proposed framework in both deterministic and stochastic scenarios, particularly showing improved computational complexity over popular gradient-based methods in the deterministic setting. This improvement in complexity arises from a careful exploitation of the hypergradient structure and the inexact Newton method. In addition to the theoretical speedup, numerical experiments demonstrate the significant practical performance benefits of incorporating curvature information.
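To make the hypergradient idea concrete, the sketch below is a minimal, hedged illustration of implicit-gradient computation for bilevel optimization, not the paper's exact algorithm. It assumes a toy quadratic lower-level problem (so the true answer is available in closed form) and approximates the inverse-Hessian-vector product with conjugate gradient, which uses curvature only through Hessian-vector products in the spirit of an inexact Newton linear solve.

```python
import numpy as np

# Toy bilevel setup (an assumption for illustration, not from the paper):
#   upper-level objective  f(x, y) = c^T y
#   lower-level objective  g(x, y) = 0.5 * y^T A y - y^T (B x)
# so y*(x) = A^{-1} B x in closed form, letting us verify the hypergradient.

rng = np.random.default_rng(0)
n, m = 4, 5                        # dims of upper variable x and lower variable y
M = rng.standard_normal((m, m))
A = M @ M.T + m * np.eye(m)        # SPD lower-level Hessian  d^2 g / dy^2
B = rng.standard_normal((m, n))    # coupling: d^2 g / dy dx = -B
c = rng.standard_normal(m)         # gradient of f with respect to y

def conjugate_gradient(hvp, b, iters=100, tol=1e-10):
    # Solve H v = b using only Hessian-vector products `hvp` -- this is where
    # curvature information enters without ever forming or inverting H.
    v = np.zeros_like(b)
    r = b - hvp(v)
    p = r.copy()
    rs = r @ r
    for _ in range(iters):
        Hp = hvp(p)
        alpha = rs / (p @ Hp)
        v += alpha * p
        r -= alpha * Hp
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return v

def hypergradient(x):
    # Implicit-gradient (hypergradient) formula:
    #   dF/dx = d f/dx - (d^2 g / dx dy) [d^2 g / dy^2]^{-1} (d f/dy)
    # Here df/dx = 0, df/dy = c, d^2 g/dx dy = -B^T, d^2 g/dy^2 = A,
    # so dF/dx = B^T A^{-1} c, approximated via the CG solve below.
    v = conjugate_gradient(lambda u: A @ u, c)   # v ~= A^{-1} c
    return B.T @ v

x = rng.standard_normal(n)
hg = hypergradient(x)
exact = B.T @ np.linalg.solve(A, c)              # closed-form reference
print(np.allclose(hg, exact, atol=1e-6))
```

For this quadratic toy problem the hypergradient does not depend on x, which makes the CG-based approximation easy to check against the closed form; in a real bilevel problem the Hessian-vector products would come from the lower-level objective at (an approximation of) its solution.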
Similar Papers
Differentially Private Bilevel Optimization: Efficient Algorithms with Near-Optimal Rates
Machine Learning (CS)
Protects private data in smart learning machines.
Safe Gradient Flow for Bilevel Optimization
Optimization and Control
Helps make smart decisions when one choice affects another.
Bilevel optimization for learning hyperparameters: Application to solving PDEs and inverse problems with Gaussian processes
Machine Learning (Stat)
Finds best settings for computer science problems.