Stochastic Gradients under Nuisances

Published: August 28, 2025 | arXiv ID: 2508.20326v1

By: Facheng Yu, Ronak Mehta, Alex Luedtke, and more

Potential Business Impact:

Shows how machine-learning systems can still be trained reliably when their objectives depend on unknown "nuisance" quantities that must themselves be estimated.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

Stochastic gradient optimization is the dominant learning paradigm for a variety of scenarios, from classical supervised learning to modern self-supervised learning. We consider stochastic gradient algorithms for learning problems whose objectives rely on unknown nuisance parameters, and establish non-asymptotic convergence guarantees. Our results show that, while the presence of a nuisance can alter the optimum and upset the optimization trajectory, the classical stochastic gradient algorithm may still converge under appropriate conditions, such as Neyman orthogonality. Moreover, even when Neyman orthogonality is not satisfied, we show that an algorithm variant with approximately orthogonalized updates (with an approximately orthogonalized gradient oracle) may achieve similar convergence rates. Examples from orthogonal statistical learning/double machine learning and causal inference are discussed.
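To make the contrast between a plain plug-in stochastic gradient and an orthogonalized one concrete, here is a minimal sketch. It is illustrative only and not the paper's algorithm: it assumes a partially linear model Y = θ·D + g(X) + ε, uses crude linear nuisance fits for m(X) = E[D|X] and l(X) = E[Y|X], and compares a naive plug-in stochastic gradient with a Neyman-orthogonalized one. All model choices, step sizes, and function names are assumptions made for this example.

```python
# Illustrative sketch (not the paper's implementation): SGD on a target parameter theta
# when the objective also depends on estimated nuisance functions.
# Assumed setting: partially linear model Y = theta*D + g(X) + noise, with nuisances
# m(X) = E[D|X] and l(X) = E[Y|X]. The Neyman-orthogonal score
#   psi(theta; y, d, x) = -(y - l(x) - theta*(d - m(x))) * (d - m(x))
# is first-order insensitive to errors in the plugged-in nuisance estimates.
import numpy as np

rng = np.random.default_rng(0)
theta_true = 2.0
n = 20_000

# Simulate data with a nonlinear confounding term g(X) and D correlated with X.
X = rng.normal(size=n)
D = 0.5 * X + rng.normal(size=n)
Y = theta_true * D + np.sin(X) + rng.normal(size=n)

# Deliberately crude plug-in nuisance estimates: linear fits for m(X) and l(X).
# In practice these would come from flexible machine-learning regressions.
m_hat = np.polyval(np.polyfit(X, D, 1), X)
l_hat = np.polyval(np.polyfit(X, Y, 1), X)

def sgd(grad_fn, steps=50_000, lr=0.05):
    """Plain stochastic gradient descent, one randomly sampled observation per step."""
    theta = 0.0
    for t in range(1, steps + 1):
        i = rng.integers(n)
        theta -= (lr / np.sqrt(t)) * grad_fn(theta, i)
    return theta

# Naive plug-in gradient: residualizes the outcome only, so its population minimizer
# is biased away from theta_true because the treatment D is not residualized.
naive_grad = lambda theta, i: -(Y[i] - l_hat[i] - theta * D[i]) * D[i]

# Neyman-orthogonalized gradient: residualizes both Y and D, so first-order errors
# in the nuisance estimates cancel out of the update direction.
orth_grad = lambda theta, i: -(Y[i] - l_hat[i] - theta * (D[i] - m_hat[i])) * (D[i] - m_hat[i])

print("plug-in SGD:        ", sgd(naive_grad))
print("orthogonalized SGD: ", sgd(orth_grad))
print("true theta:         ", theta_true)
```

Running this sketch, the orthogonalized updates land close to the true coefficient while the naive plug-in version is noticeably attenuated, which mirrors the abstract's point that orthogonalized gradient oracles can recover convergence behavior that nuisance error would otherwise spoil.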

Page Count
78 pages

Category
Statistics: Machine Learning (stat.ML)