Weight Space Representation Learning with Neural Fields
By: Zhuoqian Yang, Mathieu Salzmann, Sabine Süsstrunk
Potential Business Impact:
Makes AI generate higher-quality images and understand visual data better.
In this work, we investigate the potential of weights to serve as effective representations, focusing on neural fields. Our key insight is that constraining the optimization space through a pre-trained base model and low-rank adaptation (LoRA) can induce structure in weight space. Across reconstruction, generation, and analysis tasks on 2D and 3D data, we find that multiplicative LoRA weights achieve high representation quality while exhibiting distinctiveness and semantic structure. When used with latent diffusion models, multiplicative LoRA weights enable higher-quality generation than existing weight-space methods.
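To make the core mechanism concrete, below is a minimal PyTorch sketch of a linear layer with a multiplicative low-rank modulation of a frozen base weight, assuming a parameterization of the form W = W0 ⊙ (1 + BA). The class name, rank, and initialization are illustrative assumptions, not the paper's exact formulation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiplicativeLoRALinear(nn.Module):
    """Linear layer whose frozen base weight W0 is modulated
    multiplicatively by a low-rank update: W = W0 * (1 + B @ A).
    Sketch only; the paper's exact parameterization may differ."""

    def __init__(self, in_features: int, out_features: int, rank: int = 4):
        super().__init__()
        # Pre-trained base model weights are frozen; only (A, B) are fit.
        self.base = nn.Linear(in_features, out_features)
        self.base.weight.requires_grad_(False)
        self.base.bias.requires_grad_(False)
        # Standard LoRA-style init: B = 0 so that W = W0 at the start.
        self.A = nn.Parameter(torch.randn(rank, in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(out_features, rank))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        delta = self.B @ self.A                    # (out, in) low-rank update
        w = self.base.weight * (1.0 + delta)       # multiplicative modulation
        return F.linear(x, w, self.base.bias)

# Example: a layer of a 2D neural field taking (x, y) coordinates.
layer = MultiplicativeLoRALinear(2, 256, rank=4)
coords = torch.rand(1024, 2)
features = layer(coords)
```

In this reading, one set of (A, B) factors is fit per datum against the shared frozen base; the flattened factors would then serve as the weight-space representation passed to downstream analysis or to a latent diffusion model.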
Similar Papers
Exploring and Reshaping the Weight Distribution in LLM
Machine Learning (CS)
Makes AI learn better by organizing its parts.
Low-Rank Adaptation of Neural Fields
Graphics
Makes computer images change faster with less data.
On the Internal Representations of Graph Metanetworks
Machine Learning (CS)
Teaches computers to learn from how other computers learned.