Understanding Flatness in Generative Models: Its Role and Benefits
By: Taehwan Lee, Kyeongkook Seo, Jaejun Yoo, and more
Potential Business Impact:
Makes AI art more stable and less buggy.
Flat minima, known to enhance generalization and robustness in supervised learning, remain largely unexplored in generative models. In this work, we systematically investigate the role of loss-surface flatness in generative models, both theoretically and empirically, with a particular focus on diffusion models. We establish a theoretical claim that flatter minima improve robustness against perturbations in the target prior distribution, leading to benefits such as reduced exposure bias (the accumulation of noise-estimation errors over sampling iterations) and significantly improved resilience to model quantization, preserving generative performance even under strong quantization constraints. We further observe that Sharpness-Aware Minimization (SAM), which explicitly controls the degree of flatness, enhances flatness in diffusion models more effectively than methods that promote it only indirectly, such as Input Perturbation (IP), which enforces a Lipschitz condition, and ensembling-based approaches like Stochastic Weight Averaging (SWA) and Exponential Moving Average (EMA). Through extensive experiments on CIFAR-10, LSUN Tower, and FFHQ, we demonstrate that flat minima in diffusion models improve not only generative performance but also robustness.
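To make the SAM claim concrete, here is a minimal sketch (not the authors' code) of a sharpness-aware update wrapped around a standard epsilon-prediction diffusion loss in PyTorch. The two-pass structure, perturbing the weights along the gradient by a radius rho and then stepping with gradients taken at the perturbed point, is what "explicitly controls the degree of flatness" refers to. The model interface, the toy cosine noise schedule, and hyperparameters such as rho=0.05 are illustrative placeholders.

```python
import math
import torch
import torch.nn.functional as F

def diffusion_loss(model, x0):
    # Standard epsilon-prediction objective: the model predicts the noise added
    # to x0 at a random timestep t. The cosine schedule below is a toy stand-in.
    t = torch.randint(0, 1000, (x0.shape[0],), device=x0.device)
    noise = torch.randn_like(x0)
    alpha_bar = torch.cos(t.float() / 1000 * math.pi / 2) ** 2
    alpha_bar = alpha_bar.view(-1, *([1] * (x0.dim() - 1)))
    x_t = alpha_bar.sqrt() * x0 + (1 - alpha_bar).sqrt() * noise
    return F.mse_loss(model(x_t, t), noise)

def sam_step(model, base_optimizer, batch, rho=0.05):
    # 1) First pass: gradients of the denoising loss at the current weights.
    loss = diffusion_loss(model, batch)
    loss.backward()

    # 2) Ascend to a nearby "sharp" point: perturb each parameter by
    #    rho * g / ||g||, remembering the perturbation so it can be undone.
    with torch.no_grad():
        grads = [p.grad for p in model.parameters() if p.grad is not None]
        grad_norm = torch.norm(torch.stack([g.norm(p=2) for g in grads]), p=2)
        scale = rho / (grad_norm + 1e-12)
        perturbations = []
        for p in model.parameters():
            if p.grad is None:
                perturbations.append(None)
                continue
            e = p.grad * scale
            p.add_(e)
            perturbations.append(e)
    model.zero_grad()

    # 3) Second pass: gradients at the perturbed weights approximate the
    #    sharpness-aware objective (the worst-case loss in a rho-ball).
    diffusion_loss(model, batch).backward()

    # 4) Undo the perturbation, then update the original weights using the
    #    gradients computed at the perturbed point.
    with torch.no_grad():
        for p, e in zip(model.parameters(), perturbations):
            if e is not None:
                p.sub_(e)
    base_optimizer.step()
    base_optimizer.zero_grad()
    return loss.item()
```

By contrast, SWA and EMA only average weights along the training trajectory and IP only perturbs inputs, which is why the abstract describes them as promoting flatness indirectly.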
Similar Papers
A Function Centric Perspective On Flat and Sharp Minima
Machine Learning (CS)
Sharpness can make AI smarter and safer.
Flat Minima and Generalization: Insights from Stochastic Convex Optimization
Machine Learning (CS)
Makes computers learn better, even when they're wrong.
When Flatness Does (Not) Guarantee Adversarial Robustness
Machine Learning (CS)
Makes AI less fooled by tricky mistakes.