When Are Two Scores Better Than One? Investigating Ensembles of Diffusion Models
By: Raphaël Razafindralambo, Rémy Sun, Frédéric Precioso and more
Potential Business Impact:
Makes AI images better by combining different ideas.
Diffusion models now generate high-quality, diverse samples, with an increasing focus on more powerful models. Although ensembling is a well-known way to improve supervised models, its application to unconditional score-based diffusion models remains largely unexplored. In this work we investigate whether it provides tangible benefits for generative modelling. We find that while ensembling the scores generally improves the score-matching loss and model likelihood, it fails to consistently enhance perceptual quality metrics such as FID on image datasets. We confirm this observation across a breadth of aggregation rules, using Deep Ensembles and Monte Carlo Dropout on CIFAR-10 and FFHQ. We then investigate possible explanations for this discrepancy, such as the link between score estimation and image quality. We also study tabular data through random forests, where one aggregation strategy outperforms the others. Finally, we provide theoretical insights into the summing of score models, which shed light not only on ensembling but also on several model composition techniques (e.g. guidance).
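As a minimal sketch of the kind of score aggregation the abstract describes, the snippet below averages the score estimates of several models at a given noise level. The function `ensemble_score`, the uniform/weighted averaging rule, and the Gaussian stand-in "models" are illustrative assumptions, not the paper's actual implementation or its full set of aggregation rules.

```python
import numpy as np

def ensemble_score(score_fns, x, t, weights=None):
    """Aggregate score estimates from several models by (weighted) averaging.

    score_fns : list of callables s_i(x, t) returning an array shaped like x
    weights   : optional convex combination weights; uniform if None
    """
    scores = np.stack([s(x, t) for s in score_fns])  # shape: (n_models, *x.shape)
    if weights is None:
        weights = np.full(len(score_fns), 1.0 / len(score_fns))
    # Weighted sum over the model axis.
    return np.tensordot(np.asarray(weights), scores, axes=1)

# Toy stand-ins for trained score networks: for a Gaussian N(m, I),
# the true score is s(x) = -(x - m); here the ensemble members disagree
# on the mean, mimicking independently trained models.
s1 = lambda x, t: -(x - 0.5)
s2 = lambda x, t: -(x + 0.5)

x = np.zeros(4)
avg = ensemble_score([s1, s2], x, t=0.1)  # → array of zeros (the two scores cancel)
```

During sampling, the averaged score would simply replace the single model's score inside the usual reverse-diffusion update; the paper's finding is that this improves score-matching loss and likelihood without reliably improving FID.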
Similar Papers
How Ensemble Learning Balances Accuracy and Overfitting: A Bias-Variance Perspective on Tabular Data
Machine Learning (CS)
Makes computer predictions more accurate without mistakes.
Score Augmentation for Diffusion Models
Machine Learning (CS)
Makes AI create better pictures with less data.
MAD: Manifold Attracted Diffusion
Machine Learning (Stat)
Makes blurry pictures sharp and clear.