BLENDER: Blended Text Embeddings and Diffusion Residuals for Intra-Class Image Synthesis in Deep Metric Learning
By: Jan Niklas Kolf, Ozan Tezcan, Justin Theiss, and more
Potential Business Impact:
Makes computer vision better at telling similar-looking things apart by creating more varied training images.
The rise of Deep Generative Models (DGMs) has enabled the generation of high-quality synthetic data. When used to augment authentic data in Deep Metric Learning (DML), these synthetic samples enhance intra-class diversity and improve the performance of downstream DML tasks. We introduce BLenDeR, a diffusion sampling method designed to increase intra-class diversity for DML in a controllable way by leveraging set-theory-inspired union and intersection operations on denoising residuals. The union operation encourages any attribute present across multiple prompts, while the intersection extracts the common direction through a principal-component surrogate. These operations enable controlled synthesis of diverse attribute combinations within each class, addressing key limitations of existing generative approaches. Experiments on standard DML benchmarks demonstrate that BLenDeR consistently outperforms state-of-the-art baselines across multiple datasets and backbones. Specifically, BLenDeR achieves a 3.7% increase in Recall@1 on CUB-200 and a 1.8% increase on Cars-196 compared to state-of-the-art baselines under standard experimental settings.
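To make the union and intersection ideas more concrete, here is a minimal NumPy sketch of how per-prompt denoising residuals could be blended. It is an illustration under stated assumptions, not the paper's implementation: the tensor shapes, the weighted-average union, and the SVD-based principal-component surrogate for the intersection are all assumed for demonstration.

```python
# Hedged sketch: combining "denoising residuals" from multiple prompts.
# Shapes, weighting, and the PCA surrogate are assumptions for illustration.
import numpy as np

def union_residual(residuals, weights=None):
    """Blend residuals so an attribute present in any prompt can contribute
    (here approximated as a simple weighted average across prompts)."""
    stacked = np.stack(residuals, axis=0)            # (P, C, H, W)
    if weights is None:
        weights = np.full(len(residuals), 1.0 / len(residuals))
    return np.tensordot(weights, stacked, axes=1)    # (C, H, W)

def intersection_residual(residuals):
    """Approximate the direction shared by all prompts via the dominant
    right-singular vector of the flattened residuals (a PCA-style surrogate)."""
    X = np.stack([r.ravel() for r in residuals], axis=0)   # (P, D)
    _, _, vt = np.linalg.svd(X, full_matrices=False)
    pc = vt[0]                                             # dominant direction
    # Project the mean residual onto the shared direction, keeping its scale.
    common = (X.mean(axis=0) @ pc) * pc
    return common.reshape(residuals[0].shape)

# Toy usage with random arrays standing in for per-prompt model outputs.
rng = np.random.default_rng(0)
res_a, res_b = rng.normal(size=(4, 8, 8)), rng.normal(size=(4, 8, 8))
print(union_residual([res_a, res_b]).shape)         # (4, 8, 8)
print(intersection_residual([res_a, res_b]).shape)  # (4, 8, 8)
```

In an actual diffusion sampling loop, the blended residual would replace the single-prompt residual at each denoising step; the sketch above only shows the blending arithmetic itself.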
Similar Papers
Harnessing Diffusion-Generated Synthetic Images for Fair Image Classification
CV and Pattern Recognition
Makes AI fairer by fixing biased training pictures.
DiffThinker: Towards Generative Multimodal Reasoning with Diffusion Models
CV and Pattern Recognition
Lets computers "see" and solve visual puzzles better.
Latent Diffusion for Internet of Things Attack Data Generation in Intrusion Detection
Machine Learning (CS)
Makes smart home devices safer from hackers.