On Defining Neural Averaging

Published: August 20, 2025 | arXiv ID: 2508.14832v1

By: Su Hyeong Lee, Richard Ngo

Potential Business Impact:

Merges multiple independently trained AI models into a single model, without needing the original training data, to improve accuracy, especially on unfamiliar inputs.

Business Areas:
Machine Learning, Artificial Intelligence, Data and Analytics, Software

What does it even mean to average neural networks? We investigate the problem of synthesizing a single neural network from a collection of pretrained models, each trained on disjoint data shards, using only their final weights and no access to training data. In forming a definition of neural averaging, we draw insight from model soup, which appears to aggregate multiple models into a single model while enhancing generalization performance. In this work, we reinterpret model souping as a special case of a broader framework: Amortized Model Ensembling (AME) for neural averaging, a data-free meta-optimization approach that treats model differences as pseudogradients to guide neural weight updates. We show that this perspective not only recovers model soup but also enables more expressive and adaptive ensembling strategies. Empirically, AME produces averaged neural solutions that outperform both individual experts and model soup baselines, especially in out-of-distribution settings. Our results suggest a principled and generalizable notion of data-free model weight aggregation and define, in one sense, how to perform neural averaging.
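To make the pseudogradient idea concrete, here is a minimal, hypothetical sketch (not the paper's implementation): each expert's offset from a reference set of weights is treated as a pseudogradient, and a meta-optimizer steps the reference toward the experts. The function name, step count, and learning rate below are illustrative assumptions; with one plain-SGD step at learning rate 1 on the mean pseudogradient, the update reduces to uniform weight averaging, i.e., a model soup.

```python
# Hypothetical sketch of data-free ensembling via pseudogradients.
# All names (amortized_model_ensemble, num_meta_steps, meta_lr) are
# illustrative, not taken from the paper.
import copy
import torch


def amortized_model_ensemble(reference, experts, num_meta_steps=1, meta_lr=1.0):
    """Aggregate expert state_dicts into one set of weights, data-free.

    With num_meta_steps=1, meta_lr=1.0, and plain SGD on the mean
    pseudogradient, this reduces to uniform weight averaging (model soup).
    """
    theta = copy.deepcopy(reference)  # dict[str, Tensor]: the running solution
    for _ in range(num_meta_steps):
        for name, param in theta.items():
            # Pseudogradient: how far the current solution sits from each expert.
            pseudo_grads = [param - expert[name] for expert in experts]
            mean_grad = torch.stack(pseudo_grads).mean(dim=0)
            # Vanilla SGD-style meta-update; swapping in an adaptive rule
            # (e.g., Adam-like normalization) would give a more expressive
            # ensembling strategy than plain averaging.
            theta[name] = param - meta_lr * mean_grad
    return theta


# Toy usage with two stand-in "experts".
ref = {"w": torch.zeros(3, 3), "b": torch.zeros(3)}
experts = [
    {"w": torch.ones(3, 3), "b": torch.ones(3)},
    {"w": 3 * torch.ones(3, 3), "b": -torch.ones(3)},
]
souped = amortized_model_ensemble(ref, experts)
print(souped["w"][0, 0].item())  # 2.0 == uniform average of the experts
```

The design point this sketch illustrates is that the choice of meta-optimizer, not the averaging formula itself, is where the extra expressiveness comes from: uniform souping corresponds to the simplest possible update rule.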

Country of Origin
🇺🇸 United States

Page Count
34 pages

Category
Computer Science:
Machine Learning (CS)