On the Anisotropy of Score-Based Generative Models
By: Andreas Floros, Seyed-Mohsen Moosavi-Dezfooli, Pier Luigi Dragotti
Potential Business Impact:
Predicts how well a generative AI model will learn from data, before it is trained.
We investigate the role of network architecture in shaping the inductive biases of modern score-based generative models. To this end, we introduce the Score Anisotropy Directions (SADs), architecture-dependent directions that reveal how different networks preferentially capture data structure. Our analysis shows that SADs form adaptive bases aligned with the architecture's output geometry, providing a principled way to predict generalization ability in score models prior to training. Through both synthetic data and standard image benchmarks, we demonstrate that SADs reliably capture fine-grained model behavior and correlate with downstream performance, as measured by Wasserstein metrics. Our work offers a new lens for explaining and predicting directional biases of generative models.
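The abstract does not spell out how SADs are computed, but "architecture-dependent directions aligned with the output geometry" suggests a spectral analysis of the network's input-output map at initialization. A minimal illustrative sketch, assuming (hypothetically) that such directions can be read off the singular vectors of an untrained score network's averaged Jacobian, is:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-layer "score network": s(x) = W2 @ tanh(W1 @ x).
# This stands in for a real architecture; the paper's SADs are
# defined for actual score-model networks, not this toy.
d = 8    # data dimension
h = 32   # hidden width
W1 = rng.standard_normal((h, d)) / np.sqrt(d)
W2 = rng.standard_normal((d, h)) / np.sqrt(h)

def score(x):
    return W2 @ np.tanh(W1 @ x)

def jacobian(x, eps=1e-5):
    """Central finite-difference Jacobian of the score network at x."""
    J = np.zeros((d, d))
    for i in range(d):
        e = np.zeros(d)
        e[i] = eps
        J[:, i] = (score(x + e) - score(x - e)) / (2 * eps)
    return J

# Average the Jacobian over random inputs at initialization (pre-training).
J_avg = np.mean([jacobian(rng.standard_normal(d)) for _ in range(64)], axis=0)

# Hypothetical anisotropy directions: right singular vectors of the averaged
# Jacobian, ordered by singular value. A spread-out spectrum would indicate
# that the architecture responds more strongly along some directions than
# others, i.e. a directional inductive bias present before any training.
U, S, Vt = np.linalg.svd(J_avg)
directions = Vt   # rows are candidate directions
spectrum = S      # per-direction response strength
```

The names `directions` and `spectrum`, the finite-difference Jacobian, and the averaging over random inputs are all assumptions made for this sketch; the paper's actual construction of SADs may differ.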
Similar Papers
SaD: A Scenario-Aware Discriminator for Speech Enhancement
Sound
Makes noisy audio sound clear in any place.
The Impact of Anisotropic Covariance Structure on the Training Dynamics and Generalization Error of Linear Networks
Machine Learning (Stat)
Data shape helps computers learn better and faster.