On Defining Neural Averaging
By: Su Hyeong Lee, Richard Ngo
Potential Business Impact:
Combines AI models to make them smarter.
What does it even mean to average neural networks? We investigate the problem of synthesizing a single neural network from a collection of pretrained models, each trained on disjoint data shards, using only their final weights and no access to training data. In forming a definition of neural averaging, we take insight from model soup, which appears to aggregate multiple models into a single model while enhancing generalization performance. In this work, we reinterpret model souping as a special case of a broader framework: Amortized Model Ensembling (AME) for neural averaging, a data-free meta-optimization approach that treats model differences as pseudogradients to guide neural weight updates. We show that this perspective not only recovers model soup but also enables more expressive and adaptive ensembling strategies. Empirically, AME produces averaged neural solutions that outperform both individual experts and model soup baselines, especially in out-of-distribution settings. Our results suggest a principled and generalizable notion of data-free model weight aggregation and define, in one sense, how to perform neural averaging.
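To make the pseudogradient framing concrete, here is a minimal sketch of the idea described in the abstract, assuming flattened weight vectors and using numpy. The specific update rules (a single lr=1 step to recover the soup, and a momentum-based variant as the "more expressive" strategy) are illustrative assumptions, not the paper's exact AME algorithm.

```python
# Sketch: neural averaging via pseudogradients (model differences), data-free.
# Assumptions: each model's weights are a flat numpy vector; the momentum
# variant below is a hypothetical stand-in for an adaptive AME-style optimizer.
import numpy as np

def pseudogradients(reference, experts):
    """Model differences w_ref - w_i, treated as gradients at the reference point."""
    return [reference - w for w in experts]

def soup_step(reference, experts):
    """One plain-SGD step with lr=1 on the mean pseudogradient recovers model soup:
    w_ref - mean(w_ref - w_i) = mean(w_i)."""
    g = np.mean(pseudogradients(reference, experts), axis=0)
    return reference - g

def adaptive_steps(reference, experts, lr=0.1, beta=0.9, num_steps=50):
    """A more expressive variant: momentum on pseudogradients recomputed at the
    current iterate, applied over several data-free meta-optimization steps."""
    w = reference.copy()
    velocity = np.zeros_like(w)
    for _ in range(num_steps):
        g = np.mean([w - e for e in experts], axis=0)  # differences at current iterate
        velocity = beta * velocity + (1 - beta) * g
        w = w - lr * velocity
    return w

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    base = rng.normal(size=1000)  # shared pretrained initialization (hypothetical)
    experts = [base + 0.05 * rng.normal(size=1000) for _ in range(4)]  # per-shard experts
    soup = soup_step(base, experts)
    assert np.allclose(soup, np.mean(experts, axis=0))  # sanity check: soup recovered
    merged = adaptive_steps(base, experts)
```

The point of the sketch is only the correspondence it highlights: uniform weight averaging falls out as one degenerate optimizer step on pseudogradients, which is why richer optimizers over the same quantities can, in principle, go beyond the soup.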
Similar Papers
Parameter Averaging in Link Prediction
Machine Learning (CS)
Makes smart computer knowledge better, faster.
Group Averaging for Physics Applications: Accuracy Improvements at Zero Training Cost
Machine Learning (CS)
Makes computer predictions more accurate by using math tricks.
Souper-Model: How Simple Arithmetic Unlocks State-of-the-Art LLM Performance
Computation and Language
Combines smart computer brains for better results.