Score: 2

When Are Two Scores Better Than One? Investigating Ensembles of Diffusion Models

Published: January 16, 2026 | arXiv ID: 2601.11444v1

By: Raphaël Razafindralambo, Rémy Sun, Frédéric Precioso, and more

Potential Business Impact:

Makes AI-generated images better by combining the predictions of several models.

Business Areas:
Crowdsourcing Collaboration

Diffusion models now generate high-quality, diverse samples, with an increasing focus on ever more powerful models. Although ensembling is a well-known way to improve supervised models, its application to unconditional score-based diffusion models remains largely unexplored. In this work, we investigate whether it provides tangible benefits for generative modelling. We find that while ensembling the scores generally improves the score-matching loss and model likelihood, it fails to consistently enhance perceptual quality metrics such as FID on image datasets. We confirm this observation across a breadth of aggregation rules, using Deep Ensembles and Monte Carlo Dropout on CIFAR-10 and FFHQ. We investigate possible explanations for this discrepancy, such as the link between score estimation and image quality. We also examine tabular data with random forests and find that one aggregation strategy outperforms the others. Finally, we provide theoretical insights into the summing of score models, which shed light not only on ensembling but also on several model composition techniques (e.g. guidance).
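The core idea the abstract describes, ensembling score estimates by aggregating the outputs of several models, can be sketched in a few lines. The following toy example is an illustration only, not the paper's implementation: the "models" are closed-form scores of a Gaussian (stand-ins for trained, time-conditioned score networks), the aggregation rule is a plain pointwise mean (one of several rules the paper compares), and sampling uses unadjusted Langevin dynamics.

```python
import numpy as np

def make_gaussian_score(mu, sigma):
    """Score (gradient of log-density) of N(mu, sigma^2 I) -- a toy
    stand-in for a trained score network."""
    def score(x, t):
        # t is unused here; real score networks condition on the noise level.
        return (mu - x) / sigma**2
    return score

def ensemble_score(members):
    """Mean aggregation: average the member scores pointwise."""
    def agg(x, t):
        return np.mean([s(x, t) for s in members], axis=0)
    return agg

def langevin_sample(score, x0, step=0.1, n_steps=200, seed=0):
    """Unadjusted Langevin dynamics driven by the (ensembled) score."""
    rng = np.random.default_rng(seed)
    x = x0.copy()
    for t in range(n_steps):
        noise = rng.standard_normal(x.shape)
        x = x + step * score(x, t) + np.sqrt(2 * step) * noise
    return x

# Two ensemble members whose scores target the same N(2, 1) density.
members = [make_gaussian_score(np.array([2.0]), 1.0),
           make_gaussian_score(np.array([2.0]), 1.0)]
avg_score = ensemble_score(members)

samples = np.stack([langevin_sample(avg_score, np.zeros(1), seed=s)
                    for s in range(500)])
# Samples concentrate near the target mean of 2.0.
print(float(samples.mean()))
```

In this idealised setting the averaged score is still the exact score of the target density, so sampling is unaffected; the paper's point is that with imperfectly trained networks, improving the score estimate this way does not reliably translate into better perceptual metrics such as FID.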

Country of Origin
🇩🇪 Germany

Page Count
41 pages

Category
Computer Science:
Machine Learning (CS)