Complexity of One-Dimensional ReLU DNNs

Published: December 8, 2025 | arXiv ID: 2512.08091v1

By: Jonathan Kogan, Hayden Jananthan, Jeremy Kepner

Potential Business Impact:

Quantifies how many linear pieces a ReLU network can express, helping practitioners size and sparsify models for a target accuracy.

Business Areas:
Natural Language Processing, Artificial Intelligence, Data and Analytics, Software

We study the expressivity of one-dimensional (1D) ReLU deep neural networks through the lens of their linear regions. For randomly initialized, fully connected 1D ReLU networks (He scaling with nonzero bias) in the infinite-width limit, we prove that the expected number of linear regions grows as $\sum_{\ell = 1}^{L} n_\ell + o\left(\sum_{\ell = 1}^{L} n_\ell\right) + 1$, where $n_\ell$ denotes the number of neurons in the $\ell$-th hidden layer. We also propose a function-adaptive notion of sparsity that compares the expected regions used by the network to the minimal number needed to approximate a target within a fixed tolerance.
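The headline count can be checked numerically: a randomly initialized 1D ReLU network computes a piecewise-linear function, so its regions on an interval can be counted by detecting slope changes on a fine grid. The sketch below is not code from the paper; the widths, bias scale, grid range, and tolerance are illustrative assumptions. It builds He-initialized networks with nonzero biases and compares the empirical region count to $\sum_{\ell} n_\ell + 1$.

```python
# Minimal sketch (not the paper's code): empirically count the linear regions of a
# randomly initialized 1D ReLU network and compare against sum(n_l) + 1.
import numpy as np

def random_relu_net(widths, rng, bias_scale=0.1):
    """Sample He-initialized weights and nonzero Gaussian biases for a 1D -> 1D ReLU net."""
    layers, fan_in = [], 1
    for n in widths:
        W = rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(n, fan_in))
        b = rng.normal(0.0, bias_scale, size=n)
        layers.append((W, b))
        fan_in = n
    # Final linear readout to a scalar output.
    W_out = rng.normal(0.0, np.sqrt(2.0 / fan_in), size=(1, fan_in))
    b_out = rng.normal(0.0, bias_scale, size=1)
    layers.append((W_out, b_out))
    return layers

def forward(layers, x):
    """Evaluate the network on a batch of scalar inputs x of shape (m,)."""
    h = x[None, :]                                   # shape (1, m)
    for W, b in layers[:-1]:
        h = np.maximum(W @ h + b[:, None], 0.0)      # ReLU hidden layers
    W, b = layers[-1]
    return (W @ h + b[:, None]).ravel()              # linear output layer

def count_linear_regions(layers, lo=-3.0, hi=3.0, m=100_001, rel_tol=1e-6):
    """Count regions on [lo, hi] by detecting slope changes of the piecewise-linear output."""
    x = np.linspace(lo, hi, m)
    y = forward(layers, x)
    slopes = np.diff(y) / np.diff(x)
    jumps = np.abs(np.diff(slopes))
    threshold = rel_tol * (np.abs(slopes).max() + 1e-12)
    return int(np.sum(jumps > threshold)) + 1        # breakpoints + 1 = number of regions

rng = np.random.default_rng(0)
widths = [64, 64, 64]                                # n_1, ..., n_L (illustrative)
counts = [count_linear_regions(random_relu_net(widths, rng)) for _ in range(10)]
print(f"mean regions ~ {np.mean(counts):.1f}, sum(n_l) + 1 = {sum(widths) + 1}")
```

Counting on a bounded interval can miss breakpoints outside [lo, hi], and at finite width the observed mean typically sits below the infinite-width prediction, so this is a sanity check rather than a reproduction of the paper's result.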

Page Count
5 pages

Category
Computer Science:
Machine Learning (CS)