Generative Modeling of Weights: Generalization or Memorization?

Published: June 9, 2025 | arXiv ID: 2506.07998v1

By: Boya Zeng, Yida Yin, Zhiqiu Xu, and more

Potential Business Impact:

Computers copy old computer brains instead of making new ones.

Business Areas:
Simulation Software

Generative models, following their success in image and video generation, have recently been explored for synthesizing effective neural network weights. These approaches take trained neural network checkpoints as training data and aim to generate high-performing weights at inference time. In this work, we examine four representative methods for their ability to generate novel model weights, i.e., weights different from the checkpoints seen during training. Surprisingly, we find that these methods synthesize weights largely by memorization: they produce either replicas, or at best simple interpolations, of the training checkpoints. They also fail to outperform simple baselines, such as adding noise to the weights or taking a simple weight ensemble, at producing models that are both different and high-performing. We further show that this memorization cannot be effectively mitigated by modifying modeling factors commonly associated with memorization in image diffusion models, or by applying data augmentation. Our findings offer a realistic assessment of what types of data current generative models can capture, and highlight the need for more careful evaluation of generative models in new domains. Our code is available at https://github.com/boyazeng/weight_memorization.
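
To make the comparison concrete, here is a minimal sketch of the two baselines and a simple memorization check, assuming checkpoints are flattened into NumPy weight vectors. The function names, the noise scale, and the random toy data are illustrative assumptions for this sketch, not the paper's implementation (see the linked repository for that).

```python
import numpy as np

def noise_baseline(checkpoint, sigma=0.01, rng=None):
    """Baseline 1: perturb a single training checkpoint with Gaussian noise."""
    rng = rng or np.random.default_rng(0)
    return checkpoint + sigma * rng.standard_normal(checkpoint.shape)

def ensemble_baseline(checkpoints, rng=None):
    """Baseline 2: a random convex combination (simple weight ensemble)
    of the training checkpoints."""
    rng = rng or np.random.default_rng(0)
    coeffs = rng.dirichlet(np.ones(len(checkpoints)))  # sums to 1
    return np.tensordot(coeffs, np.stack(checkpoints), axes=1)

def nearest_checkpoint_similarity(generated, checkpoints):
    """Memorization check: cosine similarity between a generated weight
    vector and its nearest training checkpoint. Values near 1.0 suggest
    the sample is a (near-)replica rather than a novel model."""
    ckpts = np.stack(checkpoints)
    sims = ckpts @ generated / (
        np.linalg.norm(ckpts, axis=1) * np.linalg.norm(generated) + 1e-12
    )
    return sims.max()

# Toy demo: random vectors stand in for real trained checkpoints.
rng = np.random.default_rng(42)
checkpoints = [rng.standard_normal(10_000) for _ in range(8)]

sample = noise_baseline(checkpoints[0], sigma=0.01, rng=rng)
print(f"noise baseline similarity:    {nearest_checkpoint_similarity(sample, checkpoints):.4f}")

sample = ensemble_baseline(checkpoints, rng=rng)
print(f"ensemble baseline similarity: {nearest_checkpoint_similarity(sample, checkpoints):.4f}")
```

In this framing, a generative method only demonstrates genuine novelty if its samples are both far from every training checkpoint under a check like the one above and still perform well when loaded back into the network; the paper's finding is that current methods fail this combined test.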

Page Count
29 pages

Category
Computer Science:
Machine Learning (CS)