Random Gradient-Free Optimization in Infinite Dimensional Spaces
By: Caio Lins Peixoto, Daniel Csillag, Bernardo F. P. da Costa and more
In this paper, we propose a random gradient-free method for optimization in infinite-dimensional Hilbert spaces, applicable to functional optimization in diverse settings. Though such problems are often solved through finite-dimensional gradient descent over a parametrization of the functions, such as neural networks, an interesting alternative is to instead perform gradient descent directly in the function space by leveraging its Hilbert space structure, thus enabling provable guarantees and fast convergence. However, infinite-dimensional gradients are often hard to compute in practice, hindering the applicability of such methods. To overcome this limitation, our framework requires only the computation of directional derivatives and a pre-basis for the Hilbert space domain, i.e., a linearly independent set whose span is dense in the Hilbert space. This fully resolves the tractability issue, as pre-bases are much more easily obtained than full orthonormal bases or reproducing kernels -- which may not even exist -- and individual directional derivatives can be easily computed using forward-mode scalar automatic differentiation. We showcase the use of our method to solve partial differential equations à la physics-informed neural networks (PINNs), where it effectively enables provable convergence.
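As a rough illustration of the general idea (a minimal sketch, not the authors' algorithm), the code below assumes a monomial pre-basis on [0, 1], a toy PINN-style loss for the ODE u'(x) = u(x) with u(0) = 1, and a simple update that steps only along one randomly chosen pre-basis direction per iteration. The directional derivative of the loss along that pre-basis element is obtained with a single forward-mode jax.jvp call; all names and hyperparameters here are illustrative assumptions.

```python
import jax
import jax.numpy as jnp

# Hypothetical setup (not from the paper): monomial pre-basis {x^j} on [0, 1].
D = 8                                    # number of pre-basis elements used
xs = jnp.linspace(0.0, 1.0, 64)          # collocation points

def u(c, x):
    """Candidate function as a finite combination of pre-basis monomials x^j."""
    return jnp.sum(c * x ** jnp.arange(D))

def loss(c):
    """PINN-style residual for the toy ODE u'(x) = u(x) with u(0) = 1."""
    du = jax.vmap(jax.grad(lambda x: u(c, x)))(xs)   # u'(x) at collocation points
    uv = jax.vmap(lambda x: u(c, x))(xs)             # u(x) at collocation points
    residual = jnp.mean((du - uv) ** 2)
    boundary = (u(c, 0.0) - 1.0) ** 2
    return residual + boundary

key = jax.random.PRNGKey(0)
c = jnp.zeros(D)          # coefficients of the candidate function
lr = 0.05                 # illustrative step size

for step in range(2000):
    key, sub = jax.random.split(key)
    j = jax.random.randint(sub, (), 0, D)            # random pre-basis index
    direction = jnp.zeros(D).at[j].set(1.0)          # perturbation along phi_j
    # Directional derivative of the loss via forward-mode AD (one jvp, no full gradient).
    _, dloss = jax.jvp(loss, (c,), (direction,))
    c = c - lr * dloss * direction                   # step only along phi_j

print("final loss:", loss(c))
```

In this sketch the full gradient of the loss is never formed: each iteration touches a single pre-basis element, mirroring the abstract's point that only directional derivatives along a pre-basis are needed.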