Convergence of gradient flow for learning convolutional neural networks

Published: January 13, 2026 | arXiv ID: 2601.08547v1

By: Jona-Maria Diederen, Holger Rauhut, Ulrich Terstiege

Convolutional neural networks are widely used in imaging and image recognition. Learning such networks from training data leads to the minimization of a non-convex function. This makes the analysis of standard optimization methods such as variants of (stochastic) gradient descent challenging. In this article we study the simplified setting of linear convolutional networks. We show that the gradient flow (to be interpreted as an abstraction of gradient descent) applied to the empirical risk defined via certain loss functions including the square loss always converges to a critical point, under a mild condition on the training data.
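As a minimal numerical sketch of this setting (not code from the paper), gradient flow can be approximated by explicit-Euler gradient descent with a small step size on a two-layer linear convolutional network with circular convolutions and square loss. The data, dimensions, initialization, and step size below are illustrative assumptions; working in the Fourier domain uses the fact that circular convolution becomes a pointwise product there.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 8                                    # signal length (illustrative)

# Synthetic training data generated by a ground-truth filter (assumption).
X = rng.standard_normal((20, n))
c_true = 0.3 * rng.standard_normal(n)
Xh = np.fft.fft(X, axis=1)               # Fourier domain: circular convolution
Yh = np.fft.fft(c_true) * Xh             # is a pointwise product of transforms

# Two-layer linear convolutional network f(x) = w2 * (w1 * x), * = circular conv.
w1 = np.zeros(n)
w1[0] = 1.0                              # start at the identity filter
w2 = 0.1 * rng.standard_normal(n)

def emp_risk(w1, w2):
    """Empirical square-loss risk, 0.5 * mean_i ||f(x_i) - y_i||^2."""
    Rh = np.fft.fft(w2) * np.fft.fft(w1) * Xh - Yh
    # Parseval: time-domain squared norm = frequency-domain squared norm / n.
    return 0.5 * np.mean(np.sum(np.abs(Rh) ** 2, axis=1)) / n

risk_init = emp_risk(w1, w2)

# Gradient flow approximated by explicit Euler steps (plain gradient descent).
eta = 5e-3
for _ in range(10_000):
    w1h, w2h = np.fft.fft(w1), np.fft.fft(w2)
    Rh = w2h * w1h * Xh - Yh             # per-sample residuals in Fourier domain
    # The adjoint of "convolve with a" is correlation with a, i.e. multiply by
    # the conjugate transform; averaging over samples gives the risk gradient.
    g1 = np.real(np.fft.ifft(np.mean(np.conj(w2h * Xh) * Rh, axis=0)))
    g2 = np.real(np.fft.ifft(np.mean(np.conj(w1h * Xh) * Rh, axis=0)))
    w1 -= eta * g1
    w2 -= eta * g2

risk_final = emp_risk(w1, w2)
print(risk_init, "->", risk_final)       # risk decreases along the trajectory
```

In this sketch the iterates approach a critical point of the empirical risk (here a minimizer, since the targets are realizable by construction); the paper's result concerns the continuous-time gradient flow and general training data satisfying a mild condition.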

Category
Mathematics:
Optimization and Control