Unit-Consistent (UC) Adjoint for GSD and Backprop in Deep Learning Applications

Published: January 15, 2026 | arXiv ID: 2601.10873v1

By: Jeffrey Uhlmann

Potential Business Impact:

Makes neural networks train more reliably by matching the optimizer's update rule to the network's built-in rescaling symmetry, instead of letting results depend on an arbitrary choice of parameterization.

Business Areas:
Ad Exchange Advertising

Deep neural networks constructed from linear maps and positively homogeneous nonlinearities (e.g., ReLU) possess a fundamental gauge symmetry: the network function is invariant to node-wise diagonal rescalings. However, standard gradient descent is not equivariant to this symmetry, so optimization trajectories depend heavily on arbitrary parameterizations. Prior work has proposed rescaling-invariant optimization schemes for positively homogeneous networks (e.g., path-based or path-space updates). Our contribution is complementary: we formulate the invariance requirement at the level of the backward adjoint/optimization geometry, which provides a simple, operator-level recipe that can be applied uniformly across network components and optimizer state. By replacing the Euclidean transpose with a Unit-Consistent (UC) adjoint, we derive UC gauge-consistent steepest descent and backpropagation.
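Both claims in the abstract's opening are easy to verify numerically. The NumPy sketch below (an illustration, not code from the paper) uses a minimal two-layer ReLU network: rescaling the hidden nodes by a positive diagonal D leaves the function unchanged, yet the Euclidean gradients transform with D^{-1} and D rather than with the gauge, so a gradient-descent step depends on the arbitrary parameterization.

```python
import numpy as np

rng = np.random.default_rng(0)
n_in, n_hid, n_out = 4, 5, 3
x = rng.normal(size=n_in)
W1 = rng.normal(size=(n_hid, n_in))
W2 = rng.normal(size=(n_out, n_hid))
d = rng.uniform(0.5, 2.0, size=n_hid)    # positive node-wise rescaling
D, Dinv = np.diag(d), np.diag(1.0 / d)

def forward(W1, W2, x):
    # relu(D z) = D relu(z) for positive diagonal D (positive homogeneity)
    return W2 @ np.maximum(W1 @ x, 0.0)

# Gauge symmetry: the rescaled network computes the same function.
print(np.allclose(forward(W1, W2, x), forward(D @ W1, W2 @ Dinv, x)))  # True

def grads(W1, W2, x):
    # Manual backprop for L = 0.5 * ||f(x)||^2; note the Euclidean
    # transpose W2.T in the adjoint -- the operator the paper replaces.
    z = W1 @ x
    h = np.maximum(z, 0.0)
    y = W2 @ h
    gz = (W2.T @ y) * (z > 0)
    return np.outer(gz, x), np.outer(y, h)   # dL/dW1, dL/dW2

g1, g2 = grads(W1, W2, x)
g1r, g2r = grads(D @ W1, W2 @ Dinv, x)
# The gradients do NOT follow the gauge: they pick up D^{-1} and D,
# so gradient descent traces different trajectories in the two gauges.
print(np.allclose(g1r, Dinv @ g1))  # True
print(np.allclose(g2r, g2 @ D))     # True
```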
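The UC adjoint itself builds on Uhlmann's earlier unit-consistent generalized inverse, which rests on a scaling decomposition A = D S E with D, E positive diagonal and S scale-balanced. The paper's exact construction is not reproduced here; the following is a hypothetical sketch, assuming the UC adjoint is the transpose taken in that balanced frame, E^{-1} S^T D^{-1}. Under this assumption it satisfies the unit-consistency transformation law that the Euclidean transpose lacks.

```python
import numpy as np

def uc_decompose(A):
    # Geometric-mean balancing (closed form when A has no zero entries):
    # A = D @ S @ E with D, E positive diagonal and every row and column
    # of |S| having geometric mean 1. This mirrors the scaling
    # decomposition behind Uhlmann's unit-consistent generalized inverse.
    M = np.log(np.abs(A))
    mu = M.mean()
    r = M.mean(axis=1) - mu / 2.0        # log row scales
    c = M.mean(axis=0) - mu / 2.0        # log column scales
    S = np.sign(A) * np.exp(M - r[:, None] - c[None, :])
    return np.diag(np.exp(r)), S, np.diag(np.exp(c))

def uc_adjoint(A):
    # Hypothetical UC adjoint: the transpose taken in the balanced frame.
    D, S, E = uc_decompose(A)
    return np.linalg.inv(E) @ S.T @ np.linalg.inv(D)

# Unit consistency: rescaling A by positive diagonals transforms the
# adjoint covariantly, (D1 @ A @ E1)~ = E1^{-1} @ A~ @ D1^{-1}.
rng = np.random.default_rng(1)
A = rng.normal(size=(3, 4))
D1 = np.diag(rng.uniform(0.5, 2.0, size=3))
E1 = np.diag(rng.uniform(0.5, 2.0, size=4))
lhs = uc_adjoint(D1 @ A @ E1)
rhs = np.linalg.inv(E1) @ uc_adjoint(A) @ np.linalg.inv(D1)
print(np.allclose(lhs, rhs))  # True
```

Under these assumptions, substituting uc_adjoint(W2) for W2.T in the backward pass of the first sketch makes the backpropagated signal transform with the gauge (gh picks up D when h does) rather than against it, which is the operator-level fix the abstract describes.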

Page Count
17 pages

Category
Computer Science:
Machine Learning (CS)