Group Diffusion: Enhancing Image Generation by Unlocking Cross-Sample Collaboration
By: Sicheng Mo, Thao Nguyen, Richard Zhang, and more
Potential Business Impact:
Makes AI create better pictures by working together.
In this work, we explore an untapped signal in diffusion model inference. While all previous methods generate images independently at inference, we instead ask whether samples can be generated collaboratively. We propose Group Diffusion, unlocking the attention mechanism so it is shared across images rather than limited to the patches within a single image. This enables images to be jointly denoised at inference time, capturing both intra- and inter-image correspondence. We observe a clear scaling effect: larger group sizes yield stronger cross-sample attention and better generation quality. Furthermore, we introduce a quantitative measure to capture this behavior and show that its strength closely correlates with FID. Built on standard diffusion transformers, our GroupDiff achieves up to a 32.2% FID improvement on ImageNet 256×256. Our work reveals cross-sample inference as an effective, previously unexplored mechanism for generative modeling.
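The core idea can be illustrated with a minimal NumPy sketch. This is not the authors' implementation; the shapes, names, and the use of plain scaled dot-product attention are illustrative assumptions. Standard inference restricts attention to the patch tokens of one image, while group attention flattens a group of images into a single sequence so every token can attend across samples.

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax.
    x = x - x.max(axis=axis, keepdims=True)
    e = np.exp(x)
    return e / e.sum(axis=axis, keepdims=True)

def attention(q, k, v):
    # Plain scaled dot-product attention (illustrative, single head).
    scores = q @ k.T / np.sqrt(q.shape[-1])
    return softmax(scores, axis=-1) @ v

# Hypothetical shapes: a group of G images, each with N patch tokens of dim D.
G, N, D = 4, 16, 8
rng = np.random.default_rng(0)
tokens = rng.standard_normal((G, N, D))

# Standard inference: each image attends only to its own patches.
per_image = np.stack([attention(t, t, t) for t in tokens])

# Group attention: flatten the group so every token attends to all G*N tokens,
# coupling the denoising of all images in the group.
flat = tokens.reshape(G * N, D)
grouped = attention(flat, flat, flat).reshape(G, N, D)

print(per_image.shape, grouped.shape)
```

Both paths produce tokens of the same shape, so group attention is a drop-in change to the attention scope; the outputs differ because each token's context now spans the whole group.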
Similar Papers
Harnessing Diffusion-Generated Synthetic Images for Fair Image Classification
CV and Pattern Recognition
Makes AI fairer by fixing biased training pictures.
Coupled Diffusion Sampling for Training-Free Multi-View Image Editing
CV and Pattern Recognition
Edits pictures from many angles, all matching.
Fitting Image Diffusion Models on Video Datasets
CV and Pattern Recognition
Makes AI create videos that look more real.