Gradient Flow Matching for Learning Update Dynamics in Neural Network Training

Published: May 26, 2025 | arXiv ID: 2505.20221v1

By: Xiao Shou, Yanna Ding, Jianxi Gao

Potential Business Impact:

Forecasts the final weights of a neural network from a partial training run, reducing the compute spent on full training and convergence checks.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Training deep neural networks remains computationally intensive due to the iterative nature of gradient-based optimization. We propose Gradient Flow Matching (GFM), a continuous-time modeling framework that treats neural network training as a dynamical system governed by learned optimizer-aware vector fields. By leveraging conditional flow matching, GFM captures the underlying update rules of optimizers such as SGD, Adam, and RMSprop, enabling smooth extrapolation of weight trajectories toward convergence. Unlike black-box sequence models, GFM incorporates structural knowledge of gradient-based updates into the learning objective, facilitating accurate forecasting of final weights from partial training sequences. Empirically, GFM achieves forecasting accuracy that is competitive with Transformer-based models and significantly outperforms LSTM and other classical baselines. Furthermore, GFM generalizes across neural architectures and initializations, providing a unified framework for studying optimization dynamics and accelerating convergence prediction.
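
The sketch below illustrates the general idea of conditional flow matching applied to weight-trajectory forecasting, as described in the abstract: a learned vector field is regressed onto the velocity of an interpolation between an early checkpoint and the converged weights, then integrated forward to predict the final weights. This is a minimal illustration under assumed names (VectorField, cfm_loss, forecast) and a simple linear interpolant, not the authors' implementation.

```python
# Minimal conditional-flow-matching sketch for weight-trajectory forecasting.
# Assumptions: weights are flattened into a single vector per network, and a
# straight-line interpolant between an early checkpoint and the converged
# weights is used as the conditional path.
import torch
import torch.nn as nn


class VectorField(nn.Module):
    """Learned vector field v_theta(w, t) over flattened weight vectors."""

    def __init__(self, dim, hidden=256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, dim),
        )

    def forward(self, w, t):
        # Condition on the scalar time t by concatenation.
        return self.net(torch.cat([w, t], dim=-1))


def cfm_loss(v_theta, w_early, w_final):
    """Regress v_theta onto the velocity of the linear interpolant
    between an early checkpoint and the converged weights."""
    t = torch.rand(w_early.shape[0], 1)          # random time in [0, 1]
    w_t = (1 - t) * w_early + t * w_final        # interpolated weights
    target_velocity = w_final - w_early          # d w_t / d t for this path
    return ((v_theta(w_t, t) - target_velocity) ** 2).mean()


def forecast(v_theta, w_partial, steps=50):
    """Forecast converged weights by Euler-integrating dw/dt = v_theta(w, t)
    from t = 0 (partial checkpoint) to t = 1."""
    w = w_partial.clone()
    dt = 1.0 / steps
    for i in range(steps):
        t = torch.full((w.shape[0], 1), i * dt)
        w = w + dt * v_theta(w, t)
    return w
```

In practice one would train v_theta on many (checkpoint, converged-weights) pairs drawn from different runs, architectures, and optimizers; the optimizer-aware conditioning described in the paper is omitted here for brevity.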

Country of Origin
🇺🇸 United States

Page Count
18 pages

Category
Computer Science:
Machine Learning (CS)