Generative Modeling of Weights: Generalization or Memorization?
By: Boya Zeng, Yida Yin, Zhiqiu Xu, and more
Potential Business Impact:
Computers copy old computer brains instead of making new ones.
Generative models, with their success in image and video generation, have recently been explored for synthesizing effective neural network weights. These approaches take trained neural network checkpoints as training data, and aim to generate high-performing neural network weights during inference. In this work, we examine the ability of four representative methods to generate novel model weights, i.e., weights that are different from the checkpoints seen during training. Surprisingly, we find that these methods synthesize weights largely by memorization: they produce either replicas, or at best simple interpolations, of the training checkpoints. Current methods fail to outperform simple baselines, such as adding noise to the weights or taking a simple weight ensemble, in obtaining models that are both different and high-performing. We further show that this memorization cannot be effectively mitigated by modifying modeling factors commonly associated with memorization in image diffusion models, or by applying data augmentations. Our findings provide a realistic assessment of what types of data current generative models can model, and highlight the need for more careful evaluation of generative models in new domains. Our code is available at https://github.com/boyazeng/weight_memorization.
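The abstract's two simple baselines, adding noise to a training checkpoint and averaging several checkpoints (a weight ensemble), can be sketched as follows. This is a minimal illustration, not the paper's implementation: the random "checkpoints", the noise scale, and the use of maximum cosine similarity as a novelty proxy are all assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical flattened weight checkpoints (n_checkpoints x n_params),
# standing in for real trained model weights.
train_ckpts = rng.normal(size=(10, 1000))

def noise_baseline(ckpt, scale=0.01):
    """Baseline 1: perturb one training checkpoint with Gaussian noise."""
    return ckpt + scale * rng.normal(size=ckpt.shape)

def ensemble_baseline(ckpts):
    """Baseline 2: average several training checkpoints (a simple weight ensemble)."""
    return ckpts.mean(axis=0)

def max_cosine_similarity(w, ckpts):
    """Novelty proxy: cosine similarity of w to its nearest training checkpoint.

    Values close to 1 indicate the weights are essentially a replica of
    (or a point very near) some training checkpoint.
    """
    w = w / np.linalg.norm(w)
    c = ckpts / np.linalg.norm(ckpts, axis=1, keepdims=True)
    return float((c @ w).max())

noisy = noise_baseline(train_ckpts[0])
avg = ensemble_baseline(train_ckpts[:3])
print(max_cosine_similarity(noisy, train_ckpts))  # near 1: essentially a replica
print(max_cosine_similarity(avg, train_ckpts))    # lower: an interpolation of checkpoints
```

Under this proxy, generated weights that score near 1 against the training set are replicas; scores comparable to the ensemble baseline suggest simple interpolation rather than genuinely novel weights.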
Similar Papers
Geometric Flow Models over Neural Network Weights
Machine Learning (CS)
Makes AI learn new tasks with less data.
On the Edge of Memorization in Diffusion Models
Machine Learning (CS)
Helps AI learn without copying its training pictures.
Unconsciously Forget: Mitigating Memorization Without Knowing What Is Being Memorized
CV and Pattern Recognition
Stops AI from copying art it learned from.